2026-04-09 00:00:11.092622 | Job console starting
2026-04-09 00:00:11.108799 | Updating git repos
2026-04-09 00:00:11.579726 | Cloning repos into workspace
2026-04-09 00:00:11.918487 | Restoring repo states
2026-04-09 00:00:11.966151 | Merging changes
2026-04-09 00:00:11.966174 | Checking out repos
2026-04-09 00:00:12.874614 | Preparing playbooks
2026-04-09 00:00:15.086471 | Running Ansible setup
2026-04-09 00:00:21.882500 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-09 00:00:22.523916 |
2026-04-09 00:00:22.524025 | PLAY [Base pre]
2026-04-09 00:00:22.537697 |
2026-04-09 00:00:22.537815 | TASK [Setup log path fact]
2026-04-09 00:00:22.566172 | orchestrator | ok
2026-04-09 00:00:22.581804 |
2026-04-09 00:00:22.581918 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 00:00:22.609775 | orchestrator | ok
2026-04-09 00:00:22.627684 |
2026-04-09 00:00:22.627814 | TASK [emit-job-header : Print job information]
2026-04-09 00:00:22.676565 | # Job Information
2026-04-09 00:00:22.676714 | Ansible Version: 2.16.14
2026-04-09 00:00:22.676764 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-09 00:00:22.676792 | Pipeline: periodic-midnight
2026-04-09 00:00:22.676810 | Executor: 521e9411259a
2026-04-09 00:00:22.676826 | Triggered by: https://github.com/osism/testbed
2026-04-09 00:00:22.676844 | Event ID: 229a3ccad3314f149ff7c6cbe4e5e7b7
2026-04-09 00:00:22.682389 |
2026-04-09 00:00:22.682481 | LOOP [emit-job-header : Print node information]
2026-04-09 00:00:23.032544 | orchestrator | ok:
2026-04-09 00:00:23.032681 | orchestrator | # Node Information
2026-04-09 00:00:23.032709 | orchestrator | Inventory Hostname: orchestrator
2026-04-09 00:00:23.032754 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-09 00:00:23.032774 | orchestrator | Username: zuul-testbed04
2026-04-09 00:00:23.032792 | orchestrator | Distro: Debian 12.13
2026-04-09 00:00:23.032812 | orchestrator | Provider: static-testbed
2026-04-09 00:00:23.032830 | orchestrator | Region:
2026-04-09 00:00:23.032848 | orchestrator | Label: testbed-orchestrator
2026-04-09 00:00:23.032863 | orchestrator | Product Name: OpenStack Nova
2026-04-09 00:00:23.032879 | orchestrator | Interface IP: 81.163.193.140
2026-04-09 00:00:23.049812 |
2026-04-09 00:00:23.049914 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-09 00:00:23.980818 | orchestrator -> localhost | changed
2026-04-09 00:00:23.987255 |
2026-04-09 00:00:23.987350 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-09 00:00:26.648722 | orchestrator -> localhost | changed
2026-04-09 00:00:26.673510 |
2026-04-09 00:00:26.673607 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-09 00:00:27.408059 | orchestrator -> localhost | ok
2026-04-09 00:00:27.413764 |
2026-04-09 00:00:27.413869 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-09 00:00:27.492276 | orchestrator | ok
2026-04-09 00:00:27.526011 | orchestrator | included: /var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-09 00:00:27.556650 |
2026-04-09 00:00:27.556772 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-09 00:00:31.523810 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-09 00:00:31.523977 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/work/445196435b5642fdb6b68cd895968d94_id_rsa
2026-04-09 00:00:31.524009 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/work/445196435b5642fdb6b68cd895968d94_id_rsa.pub
2026-04-09 00:00:31.524030 | orchestrator -> localhost | The key fingerprint is:
2026-04-09 00:00:31.524052 | orchestrator -> localhost | SHA256:L4MWPcj5C57YxzOfpZbvkrL+qVYNcmMRHY4elYBYHGI zuul-build-sshkey
2026-04-09 00:00:31.524070 | orchestrator -> localhost | The key's randomart image is:
2026-04-09 00:00:31.524095 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-09 00:00:31.524113 | orchestrator -> localhost | | E+ooo+oo |
2026-04-09 00:00:31.524131 | orchestrator -> localhost | | ...o .+o |
2026-04-09 00:00:31.524147 | orchestrator -> localhost | | o.. |
2026-04-09 00:00:31.524164 | orchestrator -> localhost | | . +..=. |
2026-04-09 00:00:31.524181 | orchestrator -> localhost | | = S+.+ |
2026-04-09 00:00:31.524203 | orchestrator -> localhost | | + o. . |
2026-04-09 00:00:31.524220 | orchestrator -> localhost | | +.+..o. |
2026-04-09 00:00:31.524236 | orchestrator -> localhost | | = oB+== |
2026-04-09 00:00:31.524253 | orchestrator -> localhost | | . ++=X*+o |
2026-04-09 00:00:31.524269 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-09 00:00:31.524310 | orchestrator -> localhost | ok: Runtime: 0:00:02.178486
2026-04-09 00:00:31.530125 |
2026-04-09 00:00:31.530206 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-09 00:00:31.567381 | orchestrator | ok
2026-04-09 00:00:31.585784 | orchestrator | included: /var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-09 00:00:31.628109 |
2026-04-09 00:00:31.628256 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-09 00:00:31.707550 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:31.713874 |
2026-04-09 00:00:31.713965 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-09 00:00:32.651320 | orchestrator | changed
2026-04-09 00:00:32.656382 |
2026-04-09 00:00:32.659589 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-09 00:00:32.962023 | orchestrator | ok
2026-04-09 00:00:32.967497 |
2026-04-09 00:00:32.967574 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-09 00:00:33.430112 | orchestrator | ok
2026-04-09 00:00:33.442656 |
2026-04-09 00:00:33.442783 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-09 00:00:34.031794 | orchestrator | ok
2026-04-09 00:00:34.036758 |
2026-04-09 00:00:34.036833 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-09 00:00:34.089646 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:34.095241 |
2026-04-09 00:00:34.095328 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-09 00:00:34.602687 | orchestrator -> localhost | changed
2026-04-09 00:00:34.613303 |
2026-04-09 00:00:34.613392 | TASK [add-build-sshkey : Add back temp key]
2026-04-09 00:00:35.428336 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/work/445196435b5642fdb6b68cd895968d94_id_rsa (zuul-build-sshkey)
2026-04-09 00:00:35.428520 | orchestrator -> localhost | ok: Runtime: 0:00:00.023890
2026-04-09 00:00:35.434351 |
2026-04-09 00:00:35.434434 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-09 00:00:36.016811 | orchestrator | ok
2026-04-09 00:00:36.021638 |
2026-04-09 00:00:36.021716 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-09 00:00:36.066870 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:36.151540 |
2026-04-09 00:00:36.151650 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-09 00:00:36.677151 | orchestrator | ok
2026-04-09 00:00:36.701219 |
2026-04-09 00:00:36.701322 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-09 00:00:36.757994 | orchestrator | ok
2026-04-09 00:00:36.765162 |
2026-04-09 00:00:36.765269 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-09 00:00:37.579135 | orchestrator -> localhost | ok
2026-04-09 00:00:37.584958 |
2026-04-09 00:00:37.585040 | TASK [validate-host : Collect information about the host]
2026-04-09 00:00:39.146173 | orchestrator | ok
2026-04-09 00:00:39.195349 |
2026-04-09 00:00:39.202715 | TASK [validate-host : Sanitize hostname]
2026-04-09 00:00:39.367107 | orchestrator | ok
2026-04-09 00:00:39.371482 |
2026-04-09 00:00:39.371564 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-09 00:00:41.054901 | orchestrator -> localhost | changed
2026-04-09 00:00:41.060278 |
2026-04-09 00:00:41.060361 | TASK [validate-host : Collect information about zuul worker]
2026-04-09 00:00:41.766732 | orchestrator | ok
2026-04-09 00:00:41.771735 |
2026-04-09 00:00:41.771831 | TASK [validate-host : Write out all zuul information for each host]
2026-04-09 00:00:43.228329 | orchestrator -> localhost | changed
2026-04-09 00:00:43.236687 |
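The "Create Temp SSH key" and "Add back temp key" tasks above reduce to ordinary ssh-keygen/ssh-add invocations. A minimal shell sketch of that behavior (illustrative only, not the role's actual task code; the work directory is stubbed with a temp dir, and the build UUID is the one from this log):

```shell
# Sketch of what add-build-sshkey does for this build: generate a
# passphrase-less 3072-bit RSA keypair named after the build UUID,
# comment "zuul-build-sshkey", in the build's work directory.
set -e
WORK="$(mktemp -d)"                      # stands in for .../builds/<uuid>/work
UUID="445196435b5642fdb6b68cd895968d94"  # build UUID from the log
KEY="${WORK}/${UUID}_id_rsa"

ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "${KEY}"

# "Add back temp key" is essentially `ssh-add "${KEY}"` against the
# executor's ssh-agent; here we just print the SHA256 fingerprint,
# which is the value shown in the log.
ssh-keygen -l -f "${KEY}.pub"
```

After this, the role installs `${KEY}` and `${KEY}.pub` on every node's `authorized_keys`/`.ssh`, which is what the "Install build private/public key" tasks report.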
2026-04-09 00:00:43.236793 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-09 00:00:43.533542 | orchestrator | ok
2026-04-09 00:00:43.544718 |
2026-04-09 00:00:43.544834 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-09 00:02:02.292750 | orchestrator | changed:
2026-04-09 00:02:02.294164 | orchestrator | .d..t...... src/
2026-04-09 00:02:02.294227 | orchestrator | .d..t...... src/github.com/
2026-04-09 00:02:02.294254 | orchestrator | .d..t...... src/github.com/osism/
2026-04-09 00:02:02.294277 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-09 00:02:02.294299 | orchestrator | RedHat.yml
2026-04-09 00:02:02.311467 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-09 00:02:02.311485 | orchestrator | RedHat.yml
2026-04-09 00:02:02.311545 | orchestrator | = 1.53.0"...
2026-04-09 00:02:20.529686 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-09 00:02:20.684354 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-09 00:02:21.221770 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 00:02:21.540576 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-09 00:02:22.466774 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-09 00:02:22.740365 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-09 00:02:23.340579 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 00:02:23.340688 | orchestrator |
2026-04-09 00:02:23.340697 | orchestrator | Providers are signed by their developers.
2026-04-09 00:02:23.340702 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-09 00:02:23.340714 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-09 00:02:23.340752 | orchestrator |
2026-04-09 00:02:23.340758 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-09 00:02:23.340762 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-09 00:02:23.340785 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-09 00:02:23.340796 | orchestrator | you run "tofu init" in the future.
2026-04-09 00:02:23.341200 | orchestrator |
2026-04-09 00:02:23.341242 | orchestrator | OpenTofu has been successfully initialized!
2026-04-09 00:02:23.341266 | orchestrator |
2026-04-09 00:02:23.341271 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-09 00:02:23.341276 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-09 00:02:23.341280 | orchestrator | should now work.
2026-04-09 00:02:23.341284 | orchestrator |
2026-04-09 00:02:23.341288 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-09 00:02:23.341293 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-09 00:02:23.341308 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-09 00:02:23.505314 | orchestrator | Created and switched to workspace "ci"!
2026-04-09 00:02:23.505369 | orchestrator |
2026-04-09 00:02:23.505376 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-09 00:02:23.505382 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-09 00:02:23.505387 | orchestrator | for this configuration.
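The provider set that `tofu init` resolves above would come from a `required_providers` block along these lines. This is a sketch reconstructed from the versions installed in the log; the testbed repository's actual version constraints may differ (the ">= 1.53.0" constraint visible in the truncated output is not attributable to a specific provider here):

```hcl
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"   # v3.2.4 installed above
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"        # constraint shown in the log; v2.8.0 installed
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"  # v3.4.0 installed
    }
  }
}
```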
2026-04-09 00:02:23.726079 | orchestrator | ci.auto.tfvars
2026-04-09 00:02:23.726121 | orchestrator | default_custom.tf
2026-04-09 00:02:24.706572 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-09 00:02:25.744917 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-09 00:02:25.981392 | orchestrator |
2026-04-09 00:02:25.981719 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-09 00:02:25.981741 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-09 00:02:25.981753 | orchestrator |   + create
2026-04-09 00:02:25.981762 | orchestrator |  <= read (data resources)
2026-04-09 00:02:25.981795 | orchestrator |
2026-04-09 00:02:25.981805 | orchestrator | OpenTofu will perform the following actions:
2026-04-09 00:02:25.981815 | orchestrator |
2026-04-09 00:02:25.981824 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-04-09 00:02:25.981833 | orchestrator |   # (config refers to values not yet known)
2026-04-09 00:02:25.981842 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-04-09 00:02:25.981851 | orchestrator |       + checksum = (known after apply)
2026-04-09 00:02:25.981860 | orchestrator |       + created_at = (known after apply)
2026-04-09 00:02:25.981870 | orchestrator |       + file = (known after apply)
2026-04-09 00:02:25.981878 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.981911 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.981920 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-09 00:02:25.981929 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-09 00:02:25.981938 | orchestrator |       + most_recent = true
2026-04-09 00:02:25.981947 | orchestrator |       + name = (known after apply)
2026-04-09 00:02:25.981957 | orchestrator |       + protected = (known after apply)
2026-04-09 00:02:25.981965 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.981978 | orchestrator |       + schema = (known after apply)
2026-04-09 00:02:25.981987 | orchestrator |       + size_bytes = (known after apply)
2026-04-09 00:02:25.981995 | orchestrator |       + tags = (known after apply)
2026-04-09 00:02:25.982004 | orchestrator |       + updated_at = (known after apply)
2026-04-09 00:02:25.982013 | orchestrator |     }
2026-04-09 00:02:25.982055 | orchestrator |
2026-04-09 00:02:25.982065 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-04-09 00:02:25.982074 | orchestrator |   # (config refers to values not yet known)
2026-04-09 00:02:25.982083 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-04-09 00:02:25.982092 | orchestrator |       + checksum = (known after apply)
2026-04-09 00:02:25.982100 | orchestrator |       + created_at = (known after apply)
2026-04-09 00:02:25.982109 | orchestrator |       + file = (known after apply)
2026-04-09 00:02:25.982118 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982126 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.982135 | orchestrator |       + min_disk_gb = (known after apply)
2026-04-09 00:02:25.982143 | orchestrator |       + min_ram_mb = (known after apply)
2026-04-09 00:02:25.982152 | orchestrator |       + most_recent = true
2026-04-09 00:02:25.982161 | orchestrator |       + name = (known after apply)
2026-04-09 00:02:25.982170 | orchestrator |       + protected = (known after apply)
2026-04-09 00:02:25.982179 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.982187 | orchestrator |       + schema = (known after apply)
2026-04-09 00:02:25.982196 | orchestrator |       + size_bytes = (known after apply)
2026-04-09 00:02:25.982204 | orchestrator |       + tags = (known after apply)
2026-04-09 00:02:25.982213 | orchestrator |       + updated_at = (known after apply)
2026-04-09 00:02:25.982222 | orchestrator |     }
2026-04-09 00:02:25.982231 | orchestrator |
2026-04-09 00:02:25.982239 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-04-09 00:02:25.982248 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-04-09 00:02:25.982257 | orchestrator |       + content = (known after apply)
2026-04-09 00:02:25.982266 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:25.982275 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:25.982284 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:25.982292 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:25.982301 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:25.982310 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:25.982318 | orchestrator |       + directory_permission = "0777"
2026-04-09 00:02:25.982327 | orchestrator |       + file_permission = "0644"
2026-04-09 00:02:25.982336 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-04-09 00:02:25.982344 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982353 | orchestrator |     }
2026-04-09 00:02:25.982362 | orchestrator |
2026-04-09 00:02:25.982370 | orchestrator |   # local_file.id_rsa_pub will be created
2026-04-09 00:02:25.982379 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-04-09 00:02:25.982388 | orchestrator |       + content = (known after apply)
2026-04-09 00:02:25.982397 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:25.982405 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:25.982414 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:25.982423 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:25.982431 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:25.982440 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:25.982449 | orchestrator |       + directory_permission = "0777"
2026-04-09 00:02:25.982457 | orchestrator |       + file_permission = "0644"
2026-04-09 00:02:25.982473 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-04-09 00:02:25.982482 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982491 | orchestrator |     }
2026-04-09 00:02:25.982507 | orchestrator |
2026-04-09 00:02:25.982580 | orchestrator |   # local_file.inventory will be created
2026-04-09 00:02:25.982590 | orchestrator |   + resource "local_file" "inventory" {
2026-04-09 00:02:25.982598 | orchestrator |       + content = (known after apply)
2026-04-09 00:02:25.982607 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:25.982616 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:25.982624 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:25.982633 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:25.982642 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:25.982651 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:25.982660 | orchestrator |       + directory_permission = "0777"
2026-04-09 00:02:25.982668 | orchestrator |       + file_permission = "0644"
2026-04-09 00:02:25.982677 | orchestrator |       + filename = "inventory.ci"
2026-04-09 00:02:25.982686 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982695 | orchestrator |     }
2026-04-09 00:02:25.982703 | orchestrator |
2026-04-09 00:02:25.982712 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-04-09 00:02:25.982721 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-04-09 00:02:25.982730 | orchestrator |       + content = (sensitive value)
2026-04-09 00:02:25.982738 | orchestrator |       + content_base64sha256 = (known after apply)
2026-04-09 00:02:25.982747 | orchestrator |       + content_base64sha512 = (known after apply)
2026-04-09 00:02:25.982756 | orchestrator |       + content_md5 = (known after apply)
2026-04-09 00:02:25.982764 | orchestrator |       + content_sha1 = (known after apply)
2026-04-09 00:02:25.982773 | orchestrator |       + content_sha256 = (known after apply)
2026-04-09 00:02:25.982782 | orchestrator |       + content_sha512 = (known after apply)
2026-04-09 00:02:25.982790 | orchestrator |       + directory_permission = "0700"
2026-04-09 00:02:25.982799 | orchestrator |       + file_permission = "0600"
2026-04-09 00:02:25.982807 | orchestrator |       + filename = ".id_rsa.ci"
2026-04-09 00:02:25.982816 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982825 | orchestrator |     }
2026-04-09 00:02:25.982833 | orchestrator |
2026-04-09 00:02:25.982842 | orchestrator |   # null_resource.node_semaphore will be created
2026-04-09 00:02:25.982851 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-04-09 00:02:25.982859 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982868 | orchestrator |     }
2026-04-09 00:02:25.982877 | orchestrator |
2026-04-09 00:02:25.982886 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-09 00:02:25.982894 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-09 00:02:25.982903 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.982912 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.982921 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.982929 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.982937 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.982945 | orchestrator |       + name = "testbed-volume-manager-base"
2026-04-09 00:02:25.982953 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.982961 | orchestrator |       + size = 80
2026-04-09 00:02:25.982969 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.982976 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.982984 | orchestrator |     }
2026-04-09 00:02:25.982992 | orchestrator |
2026-04-09 00:02:25.983000 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-09 00:02:25.983008 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:25.983016 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983024 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983032 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983046 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.983054 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983062 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-04-09 00:02:25.983070 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983078 | orchestrator |       + size = 80
2026-04-09 00:02:25.983086 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983094 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983101 | orchestrator |     }
2026-04-09 00:02:25.983109 | orchestrator |
2026-04-09 00:02:25.983117 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-09 00:02:25.983125 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:25.983133 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983141 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983149 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983156 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.983164 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983172 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-04-09 00:02:25.983180 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983188 | orchestrator |       + size = 80
2026-04-09 00:02:25.983196 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983204 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983212 | orchestrator |     }
2026-04-09 00:02:25.983220 | orchestrator |
2026-04-09 00:02:25.983228 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-09 00:02:25.983236 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:25.983243 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983251 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983259 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983267 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.983275 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983283 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-04-09 00:02:25.983291 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983299 | orchestrator |       + size = 80
2026-04-09 00:02:25.983307 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983315 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983322 | orchestrator |     }
2026-04-09 00:02:25.983330 | orchestrator |
2026-04-09 00:02:25.983338 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-09 00:02:25.983346 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:25.983354 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983362 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983376 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983384 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.983392 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983404 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-04-09 00:02:25.983413 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983421 | orchestrator |       + size = 80
2026-04-09 00:02:25.983429 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983436 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983444 | orchestrator |     }
2026-04-09 00:02:25.983452 | orchestrator |
2026-04-09 00:02:25.983460 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-09 00:02:25.983468 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:25.983476 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983484 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983492 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983506 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.983528 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983537 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-04-09 00:02:25.983544 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983552 | orchestrator |       + size = 80
2026-04-09 00:02:25.983560 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983568 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983576 | orchestrator |     }
2026-04-09 00:02:25.983584 | orchestrator |
2026-04-09 00:02:25.983592 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-09 00:02:25.983600 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:25.983608 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983616 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983623 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983631 | orchestrator |       + image_id = (known after apply)
2026-04-09 00:02:25.983639 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983647 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-04-09 00:02:25.983655 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983663 | orchestrator |       + size = 80
2026-04-09 00:02:25.983671 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983679 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983687 | orchestrator |     }
2026-04-09 00:02:25.983695 | orchestrator |
2026-04-09 00:02:25.983702 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-09 00:02:25.983711 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.983719 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983726 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983734 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983742 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983751 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-04-09 00:02:25.983759 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983766 | orchestrator |       + size = 20
2026-04-09 00:02:25.983774 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983783 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983790 | orchestrator |     }
2026-04-09 00:02:25.983798 | orchestrator |
2026-04-09 00:02:25.983806 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-09 00:02:25.983814 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.983822 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983830 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983838 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983846 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983854 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-04-09 00:02:25.983862 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983869 | orchestrator |       + size = 20
2026-04-09 00:02:25.983877 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983885 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.983893 | orchestrator |     }
2026-04-09 00:02:25.983901 | orchestrator |
2026-04-09 00:02:25.983909 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-09 00:02:25.983917 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.983925 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.983933 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.983941 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.983949 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.983957 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-04-09 00:02:25.983965 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.983977 | orchestrator |       + size = 20
2026-04-09 00:02:25.983986 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.983994 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.984002 | orchestrator |     }
2026-04-09 00:02:25.984009 | orchestrator |
2026-04-09 00:02:25.984017 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-09 00:02:25.984025 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.984033 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.984041 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.984049 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.984057 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.984065 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-04-09 00:02:25.984073 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.984081 | orchestrator |       + size = 20
2026-04-09 00:02:25.984089 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.984097 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.984104 | orchestrator |     }
2026-04-09 00:02:25.984112 | orchestrator |
2026-04-09 00:02:25.984120 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-09 00:02:25.984128 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.984136 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.984144 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.984152 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.984160 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.984172 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-04-09 00:02:25.984181 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.984202 | orchestrator |       + size = 20
2026-04-09 00:02:25.984210 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.984218 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.984226 | orchestrator |     }
2026-04-09 00:02:25.984234 | orchestrator |
2026-04-09 00:02:25.984242 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-09 00:02:25.984250 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.984258 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.984266 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.984274 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.984282 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.984290 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-04-09 00:02:25.984298 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.984306 | orchestrator |       + size = 20
2026-04-09 00:02:25.984313 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.984321 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.984329 | orchestrator |     }
2026-04-09 00:02:25.984337 | orchestrator |
2026-04-09 00:02:25.984345 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-09 00:02:25.984353 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.984361 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.984369 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.984377 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.984385 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.984392 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-04-09 00:02:25.984400 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.984408 | orchestrator |       + size = 20
2026-04-09 00:02:25.984416 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.984424 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.984432 | orchestrator |     }
2026-04-09 00:02:25.984440 | orchestrator |
2026-04-09 00:02:25.984448 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-09 00:02:25.984456 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:25.984469 | orchestrator |       + attachment = (known after apply)
2026-04-09 00:02:25.984477 | orchestrator |       + availability_zone = "nova"
2026-04-09 00:02:25.984485 | orchestrator |       + id = (known after apply)
2026-04-09 00:02:25.984492 | orchestrator |       + metadata = (known after apply)
2026-04-09 00:02:25.984500 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-04-09 00:02:25.984521 | orchestrator |       + region = (known after apply)
2026-04-09 00:02:25.984529 | orchestrator |       + size = 20
2026-04-09 00:02:25.984537 | orchestrator |       + volume_retype_policy = "never"
2026-04-09 00:02:25.984545 | orchestrator |       + volume_type = "ssd"
2026-04-09 00:02:25.984554 | orchestrator |     }
2026-04-09 00:02:25.984562 | orchestrator |
2026-04-09 00:02:25.984570 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-09 00:02:25.984577 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-09 00:02:25.984585 | orchestrator | + attachment = (known after apply) 2026-04-09 00:02:25.984593 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.984601 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.984609 | orchestrator | + metadata = (known after apply) 2026-04-09 00:02:25.984617 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-09 00:02:25.984625 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.984634 | orchestrator | + size = 20 2026-04-09 00:02:25.984642 | orchestrator | + volume_retype_policy = "never" 2026-04-09 00:02:25.984650 | orchestrator | + volume_type = "ssd" 2026-04-09 00:02:25.984657 | orchestrator | } 2026-04-09 00:02:25.984665 | orchestrator | 2026-04-09 00:02:25.984673 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-09 00:02:25.984682 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-09 00:02:25.984690 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.984698 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.984706 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.984714 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.984722 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.984730 | orchestrator | + config_drive = true 2026-04-09 00:02:25.984737 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.984745 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.984753 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-09 00:02:25.984761 | orchestrator | + force_delete = false 2026-04-09 00:02:25.984769 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.984777 | 
orchestrator | + id = (known after apply) 2026-04-09 00:02:25.984785 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.984793 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.984801 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.984809 | orchestrator | + name = "testbed-manager" 2026-04-09 00:02:25.984817 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.984825 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.984833 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.984841 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:25.984849 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.984856 | orchestrator | + user_data = (sensitive value) 2026-04-09 00:02:25.984864 | orchestrator | 2026-04-09 00:02:25.984873 | orchestrator | + block_device { 2026-04-09 00:02:25.984881 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.984889 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:25.984900 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.984909 | orchestrator | + multiattach = false 2026-04-09 00:02:25.984917 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.984925 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.984938 | orchestrator | } 2026-04-09 00:02:25.984946 | orchestrator | 2026-04-09 00:02:25.984954 | orchestrator | + network { 2026-04-09 00:02:25.984962 | orchestrator | + access_network = false 2026-04-09 00:02:25.984970 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.984978 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.984986 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.984998 | orchestrator | + name = (known after apply) 2026-04-09 00:02:25.985007 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.985015 | orchestrator | + uuid = (known after apply) 2026-04-09 
00:02:25.985023 | orchestrator | } 2026-04-09 00:02:25.985031 | orchestrator | } 2026-04-09 00:02:25.985039 | orchestrator | 2026-04-09 00:02:25.985047 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-09 00:02:25.985055 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:25.985063 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.985071 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.985079 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.985087 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.985094 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.985102 | orchestrator | + config_drive = true 2026-04-09 00:02:25.985110 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.985118 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.985126 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:25.985134 | orchestrator | + force_delete = false 2026-04-09 00:02:25.985142 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.985150 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.985158 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.985166 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.985174 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.985182 | orchestrator | + name = "testbed-node-0" 2026-04-09 00:02:25.985190 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.985198 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.985206 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.985214 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:25.985222 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.985230 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:25.985238 | orchestrator | 2026-04-09 00:02:25.985246 | orchestrator | + block_device { 2026-04-09 00:02:25.985254 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.985262 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:25.985270 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.985277 | orchestrator | + multiattach = false 2026-04-09 00:02:25.985285 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.985293 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.985301 | orchestrator | } 2026-04-09 00:02:25.985309 | orchestrator | 2026-04-09 00:02:25.985317 | orchestrator | + network { 2026-04-09 00:02:25.985325 | orchestrator | + access_network = false 2026-04-09 00:02:25.985333 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.985341 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.985349 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.985357 | orchestrator | + name = (known after apply) 2026-04-09 00:02:25.985365 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.985373 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.985381 | orchestrator | } 2026-04-09 00:02:25.985389 | orchestrator | } 2026-04-09 00:02:25.985397 | orchestrator | 2026-04-09 00:02:25.985405 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-09 00:02:25.985413 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:25.985421 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.985434 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.985442 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.985449 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.985458 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.985465 
| orchestrator | + config_drive = true 2026-04-09 00:02:25.985473 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.985481 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.985489 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:25.985497 | orchestrator | + force_delete = false 2026-04-09 00:02:25.985505 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.985530 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.985538 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.985546 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.985554 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.985561 | orchestrator | + name = "testbed-node-1" 2026-04-09 00:02:25.985569 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.985577 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.985585 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.985593 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:25.985601 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.985609 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:25.985617 | orchestrator | 2026-04-09 00:02:25.985625 | orchestrator | + block_device { 2026-04-09 00:02:25.985633 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.985641 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:25.985648 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.985656 | orchestrator | + multiattach = false 2026-04-09 00:02:25.985664 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.985672 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.985681 | orchestrator | } 2026-04-09 00:02:25.985688 | orchestrator | 2026-04-09 00:02:25.985696 | orchestrator | + network { 2026-04-09 00:02:25.985704 | orchestrator | + access_network = 
false 2026-04-09 00:02:25.985712 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.985720 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.985728 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.985736 | orchestrator | + name = (known after apply) 2026-04-09 00:02:25.985744 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.985752 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.985760 | orchestrator | } 2026-04-09 00:02:25.985768 | orchestrator | } 2026-04-09 00:02:25.985776 | orchestrator | 2026-04-09 00:02:25.985784 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-09 00:02:25.985792 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:25.985800 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.985808 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.985816 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.985828 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.985840 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.985849 | orchestrator | + config_drive = true 2026-04-09 00:02:25.985857 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.985865 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.985873 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:25.985880 | orchestrator | + force_delete = false 2026-04-09 00:02:25.985888 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.985896 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.985904 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.985917 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.985925 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.985933 | orchestrator | + name = 
"testbed-node-2" 2026-04-09 00:02:25.985941 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.985949 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.985956 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.985964 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:25.985972 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.985980 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:25.985988 | orchestrator | 2026-04-09 00:02:25.985996 | orchestrator | + block_device { 2026-04-09 00:02:25.986004 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.986012 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:25.986052 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.986060 | orchestrator | + multiattach = false 2026-04-09 00:02:25.986068 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.986076 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.986084 | orchestrator | } 2026-04-09 00:02:25.986092 | orchestrator | 2026-04-09 00:02:25.986100 | orchestrator | + network { 2026-04-09 00:02:25.986108 | orchestrator | + access_network = false 2026-04-09 00:02:25.986116 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.986123 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.986131 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.986139 | orchestrator | + name = (known after apply) 2026-04-09 00:02:25.986147 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.986155 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.986163 | orchestrator | } 2026-04-09 00:02:25.986171 | orchestrator | } 2026-04-09 00:02:25.986179 | orchestrator | 2026-04-09 00:02:25.986187 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-09 00:02:25.986195 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:25.986203 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.986211 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.986219 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.986227 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.986235 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.986243 | orchestrator | + config_drive = true 2026-04-09 00:02:25.986251 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.986259 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.986266 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:25.986274 | orchestrator | + force_delete = false 2026-04-09 00:02:25.986282 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.986290 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.986298 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.986306 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.986313 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.986321 | orchestrator | + name = "testbed-node-3" 2026-04-09 00:02:25.986329 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.986337 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.986345 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.986353 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:25.986360 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.986368 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:25.986376 | orchestrator | 2026-04-09 00:02:25.986384 | orchestrator | + block_device { 2026-04-09 00:02:25.986397 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.986405 | orchestrator | + delete_on_termination = false 2026-04-09 
00:02:25.986412 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.986425 | orchestrator | + multiattach = false 2026-04-09 00:02:25.986433 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.986441 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.986449 | orchestrator | } 2026-04-09 00:02:25.986457 | orchestrator | 2026-04-09 00:02:25.986465 | orchestrator | + network { 2026-04-09 00:02:25.986473 | orchestrator | + access_network = false 2026-04-09 00:02:25.986481 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.986489 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.986497 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.986505 | orchestrator | + name = (known after apply) 2026-04-09 00:02:25.986543 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.986552 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.986560 | orchestrator | } 2026-04-09 00:02:25.986568 | orchestrator | } 2026-04-09 00:02:25.986576 | orchestrator | 2026-04-09 00:02:25.986583 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-09 00:02:25.986592 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:25.986600 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.986608 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.986616 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.986623 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.986631 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.986639 | orchestrator | + config_drive = true 2026-04-09 00:02:25.986647 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.986655 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.986663 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:25.986671 | 
orchestrator | + force_delete = false 2026-04-09 00:02:25.986679 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.986687 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.986695 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.986707 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.986715 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.986723 | orchestrator | + name = "testbed-node-4" 2026-04-09 00:02:25.986731 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.986739 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.986747 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.986755 | orchestrator | + stop_before_destroy = false 2026-04-09 00:02:25.986763 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.986771 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:25.986779 | orchestrator | 2026-04-09 00:02:25.986787 | orchestrator | + block_device { 2026-04-09 00:02:25.986795 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.986803 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:25.986811 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.986819 | orchestrator | + multiattach = false 2026-04-09 00:02:25.986827 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.986834 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.986842 | orchestrator | } 2026-04-09 00:02:25.986850 | orchestrator | 2026-04-09 00:02:25.986858 | orchestrator | + network { 2026-04-09 00:02:25.986866 | orchestrator | + access_network = false 2026-04-09 00:02:25.986874 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.986882 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.986890 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.986898 | orchestrator | + name = (known 
after apply) 2026-04-09 00:02:25.986906 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.986914 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.986922 | orchestrator | } 2026-04-09 00:02:25.986930 | orchestrator | } 2026-04-09 00:02:25.986943 | orchestrator | 2026-04-09 00:02:25.986951 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-09 00:02:25.986959 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-09 00:02:25.986967 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-09 00:02:25.986975 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-09 00:02:25.986983 | orchestrator | + all_metadata = (known after apply) 2026-04-09 00:02:25.986991 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:25.986999 | orchestrator | + availability_zone = "nova" 2026-04-09 00:02:25.987007 | orchestrator | + config_drive = true 2026-04-09 00:02:25.987015 | orchestrator | + created = (known after apply) 2026-04-09 00:02:25.987022 | orchestrator | + flavor_id = (known after apply) 2026-04-09 00:02:25.987031 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-09 00:02:25.987037 | orchestrator | + force_delete = false 2026-04-09 00:02:25.987047 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-09 00:02:25.987054 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.987061 | orchestrator | + image_id = (known after apply) 2026-04-09 00:02:25.987068 | orchestrator | + image_name = (known after apply) 2026-04-09 00:02:25.987075 | orchestrator | + key_pair = "testbed" 2026-04-09 00:02:25.987081 | orchestrator | + name = "testbed-node-5" 2026-04-09 00:02:25.987088 | orchestrator | + power_state = "active" 2026-04-09 00:02:25.987095 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.987102 | orchestrator | + security_groups = (known after apply) 2026-04-09 00:02:25.987109 | orchestrator | + 
stop_before_destroy = false 2026-04-09 00:02:25.987115 | orchestrator | + updated = (known after apply) 2026-04-09 00:02:25.987122 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-09 00:02:25.987129 | orchestrator | 2026-04-09 00:02:25.987136 | orchestrator | + block_device { 2026-04-09 00:02:25.987143 | orchestrator | + boot_index = 0 2026-04-09 00:02:25.987149 | orchestrator | + delete_on_termination = false 2026-04-09 00:02:25.987156 | orchestrator | + destination_type = "volume" 2026-04-09 00:02:25.987163 | orchestrator | + multiattach = false 2026-04-09 00:02:25.987170 | orchestrator | + source_type = "volume" 2026-04-09 00:02:25.987176 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.987183 | orchestrator | } 2026-04-09 00:02:25.987190 | orchestrator | 2026-04-09 00:02:25.987197 | orchestrator | + network { 2026-04-09 00:02:25.987203 | orchestrator | + access_network = false 2026-04-09 00:02:25.987210 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-09 00:02:25.987217 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-09 00:02:25.987224 | orchestrator | + mac = (known after apply) 2026-04-09 00:02:25.987230 | orchestrator | + name = (known after apply) 2026-04-09 00:02:25.987237 | orchestrator | + port = (known after apply) 2026-04-09 00:02:25.987244 | orchestrator | + uuid = (known after apply) 2026-04-09 00:02:25.987251 | orchestrator | } 2026-04-09 00:02:25.987258 | orchestrator | } 2026-04-09 00:02:25.987264 | orchestrator | 2026-04-09 00:02:25.987271 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-09 00:02:25.987278 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-09 00:02:25.987285 | orchestrator | + fingerprint = (known after apply) 2026-04-09 00:02:25.987291 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.987298 | orchestrator | + name = "testbed" 2026-04-09 00:02:25.987305 | orchestrator | + private_key = 
(sensitive value) 2026-04-09 00:02:25.987311 | orchestrator | + public_key = (known after apply) 2026-04-09 00:02:25.987318 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.987325 | orchestrator | + user_id = (known after apply) 2026-04-09 00:02:25.987332 | orchestrator | } 2026-04-09 00:02:25.987338 | orchestrator | 2026-04-09 00:02:25.987345 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-09 00:02:25.987352 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-09 00:02:25.987363 | orchestrator | + device = (known after apply) 2026-04-09 00:02:25.987370 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.987377 | orchestrator | + instance_id = (known after apply) 2026-04-09 00:02:25.987383 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.987390 | orchestrator | + volume_id = (known after apply) 2026-04-09 00:02:25.987397 | orchestrator | } 2026-04-09 00:02:25.987404 | orchestrator | 2026-04-09 00:02:25.987410 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-09 00:02:25.987417 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-09 00:02:25.987424 | orchestrator | + device = (known after apply) 2026-04-09 00:02:25.987431 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.987438 | orchestrator | + instance_id = (known after apply) 2026-04-09 00:02:25.987444 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.987451 | orchestrator | + volume_id = (known after apply) 2026-04-09 00:02:25.987458 | orchestrator | } 2026-04-09 00:02:25.987464 | orchestrator | 2026-04-09 00:02:25.987474 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-09 00:02:25.987484 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-09 00:02:25.991201 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:25.991206 | orchestrator | + no_gateway = false 2026-04-09 00:02:25.991212 | orchestrator | + region = (known after apply) 2026-04-09 00:02:25.991217 | orchestrator | + service_types = (known after apply) 2026-04-09 00:02:25.991226 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:25.991232 | orchestrator | 2026-04-09 00:02:25.991237 | orchestrator | + allocation_pool { 2026-04-09 00:02:25.991243 | orchestrator | + end = "192.168.31.250" 2026-04-09 00:02:25.991248 | orchestrator | + start = "192.168.31.200" 2026-04-09 00:02:25.991254 | orchestrator | } 2026-04-09 00:02:25.991259 | orchestrator | } 2026-04-09 00:02:25.991265 | orchestrator | 2026-04-09 00:02:25.991270 | orchestrator | # terraform_data.image will be created 2026-04-09 00:02:25.991276 | orchestrator | + resource "terraform_data" "image" { 2026-04-09 00:02:25.991281 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.991286 | orchestrator | + input = "Ubuntu 24.04" 2026-04-09 00:02:25.991292 | orchestrator | + output = (known after apply) 2026-04-09 00:02:25.991297 | orchestrator | } 2026-04-09 00:02:25.991303 | orchestrator | 2026-04-09 00:02:25.991308 | orchestrator | # terraform_data.image_node will be created 2026-04-09 00:02:25.991314 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-09 00:02:25.991319 | orchestrator | + id = (known after apply) 2026-04-09 00:02:25.991324 | orchestrator | + input = "Ubuntu 24.04" 2026-04-09 00:02:25.991330 | orchestrator | + output = (known after apply) 2026-04-09 00:02:25.991335 | orchestrator | } 2026-04-09 00:02:25.991341 | orchestrator | 2026-04-09 00:02:25.991346 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
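The `subnet_management` resource in the plan above places a DHCP allocation pool (192.168.31.200 through 192.168.31.250) inside the 192.168.16.0/20 CIDR. As a quick sanity check of those values (an illustrative sketch using the stdlib, not part of the job itself), both pool boundaries can be verified to lie within the subnet:

```python
import ipaddress

# Values taken from the subnet_management plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool boundaries must fall inside the subnet CIDR.
print(pool_start in cidr and pool_end in cidr)  # True
print(cidr.num_addresses)  # 4096 (a /20 spans 192.168.16.0-192.168.31.255)
```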
2026-04-09 00:02:25.991352 | orchestrator |
2026-04-09 00:02:25.991357 | orchestrator | Changes to Outputs:
2026-04-09 00:02:25.991363 | orchestrator | + manager_address = (sensitive value)
2026-04-09 00:02:25.991368 | orchestrator | + private_key = (sensitive value)
2026-04-09 00:02:26.202092 | orchestrator | terraform_data.image: Creating...
2026-04-09 00:02:26.202149 | orchestrator | terraform_data.image: Creation complete after 0s [id=0e7952fd-0bee-9ee8-caeb-c047d9ab5814]
2026-04-09 00:02:26.202159 | orchestrator | terraform_data.image_node: Creating...
2026-04-09 00:02:26.202168 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ba378a96-8a4c-c161-2008-4ce11e1236d3]
2026-04-09 00:02:26.226077 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-09 00:02:26.231846 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-09 00:02:26.232005 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-09 00:02:26.252249 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-09 00:02:26.252895 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-09 00:02:26.253431 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-09 00:02:26.254598 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-09 00:02:26.254898 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-09 00:02:26.256247 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-09 00:02:26.270149 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-09 00:02:26.694107 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-09 00:02:26.697706 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-09 00:02:26.703160 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-09 00:02:26.708925 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-09 00:02:26.809297 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-09 00:02:26.813058 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-09 00:02:27.307246 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=7b3e750c-bef1-410a-b45d-cf78a5f5faa8]
2026-04-09 00:02:27.315680 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-09 00:02:29.866495 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47]
2026-04-09 00:02:29.870404 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-09 00:02:29.890980 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=bf0ee1e9-2919-47e1-8e63-acece2856b48]
2026-04-09 00:02:29.898105 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-09 00:02:29.906543 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=699b0239-fef5-4b39-83a4-6673e212f6a1]
2026-04-09 00:02:29.917653 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-09 00:02:29.940400 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=646eefac-58ec-4f92-9595-08f65c34439b]
2026-04-09 00:02:29.955506 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-09 00:02:29.963575 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=d010236c-a8cf-44aa-aea8-1599ad338c7a]
2026-04-09 00:02:29.985406 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=cb2ae45d-eb27-4723-ba8e-6f14f0885645]
2026-04-09 00:02:30.002585 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-09 00:02:30.006941 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3]
2026-04-09 00:02:30.022070 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-09 00:02:30.022124 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=6ddb4c8f-ad36-4043-a4c5-c841e18226a7]
2026-04-09 00:02:30.034238 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-09 00:02:30.034285 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-09 00:02:30.038584 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=9e756f65aa80490569304a8e47a5caf0f9d2e8df]
2026-04-09 00:02:30.040848 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=c2c77d89-653e-4715-a798-7d926e5d00ec]
2026-04-09 00:02:30.042542 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=0d4bf073a4ec827a8bbe7e7e825751591bf19098]
2026-04-09 00:02:30.047416 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-09 00:02:30.667912 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=750703c1-2485-4dbe-94f0-f3c1f99dc2e8]
2026-04-09 00:02:32.030057 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=4108768d-f91d-45a3-8666-65a4911f8a74]
2026-04-09 00:02:32.035444 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-09 00:02:33.213205 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=e2746891-05dc-4fd7-9896-5ab09f2729dc]
2026-04-09 00:02:33.272092 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=60227d82-9bd6-4e9b-88ba-e02146459042]
2026-04-09 00:02:33.286686 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf]
2026-04-09 00:02:33.346582 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=f1fbf9dd-48b0-4566-a31a-874418385eae]
2026-04-09 00:02:33.383791 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=de0fc72e-0758-4348-ae63-a1f95e9b2a28]
2026-04-09 00:02:33.400328 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=dec93e02-284b-4105-b505-be63281832aa]
2026-04-09 00:02:34.738693 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=0414041a-196d-475e-8db7-09284228655f]
2026-04-09 00:02:34.744105 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-09 00:02:34.744953 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-09 00:02:34.747817 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-09 00:02:35.000112 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1b9cc942-0d66-4b02-85ac-11e3d2dcf4d5]
2026-04-09 00:02:35.023674 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-09 00:02:35.023848 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-09 00:02:35.029945 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-09 00:02:35.392769 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-09 00:02:35.392827 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-09 00:02:35.392836 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-09 00:02:35.392843 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-09 00:02:35.392850 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-09 00:02:35.392858 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4ceecde8-9bff-4de5-b502-5cf74dbd2068]
2026-04-09 00:02:35.392865 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-09 00:02:35.392872 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=35ad60e5-3807-486f-b272-92f5cdba403b]
2026-04-09 00:02:35.392881 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-09 00:02:35.420188 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=cad738db-e3ab-4ef4-93fa-6a6e4224935f]
2026-04-09 00:02:35.423434 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-09 00:02:35.597569 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=a84bf325-6417-4e5a-8f64-b7dec804ff9e]
2026-04-09 00:02:35.612985 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-09 00:02:35.671705 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=587f5326-0e00-41ec-8a66-d0690d7f8f92]
2026-04-09 00:02:35.674576 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-09 00:02:35.696194 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7a05c10f-e8b0-4812-b2e1-1bc4a9c7d91c]
2026-04-09 00:02:35.700756 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-09 00:02:35.755684 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=a0f45adc-8215-4ed7-bcd7-7b5c0fca4ba2]
2026-04-09 00:02:35.758973 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-09 00:02:35.817140 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=2fcff95a-44e6-48d5-9425-3fae6ed32302]
2026-04-09 00:02:35.820675 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-09 00:02:35.916919 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=8bb704b8-c218-4a4a-a1d0-af07f4d0273b]
2026-04-09 00:02:36.075466 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=0bf06070-99f3-4b2a-b77b-b77cca9e0d07]
2026-04-09 00:02:36.202680 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f038f879-455c-42af-9541-220167f06414]
2026-04-09 00:02:36.212603 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=81688c0c-ee14-4ab3-8ef7-a5f25c128408]
2026-04-09 00:02:36.298565 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=d70b676b-7c19-49ee-a7c3-eb343c865f8a]
2026-04-09 00:02:36.376911 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=78b2228e-127d-48f1-8a03-ec538d83d727]
2026-04-09 00:02:36.486600 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=c2212e14-e675-4457-871c-76e4cfd27a17]
2026-04-09 00:02:36.613973 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=12216599-f937-49ea-bdc8-04b1f3fff37a]
2026-04-09 00:02:36.795812 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=e2436842-9047-4c50-af1d-25bb54c14cb0]
2026-04-09 00:02:37.613335 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=2c78f64f-2452-4a00-8f19-e0af74cd5918]
2026-04-09 00:02:37.655401 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-09 00:02:37.661965 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
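The security-group rules created above identify protocols the way Neutron accepts them: by IANA name ("icmp", "tcp", "udp") or by number ("112", which is VRRP, used by keepalived for highly available addresses). A small lookup table (an illustrative sketch, not part of the job) makes the numeric rule readable:

```python
# IANA-assigned IP protocol numbers for the protocols that appear in the
# testbed security-group rules above; "112" is VRRP.
IP_PROTOCOLS = {"icmp": 1, "tcp": 6, "udp": 17, "vrrp": 112}

def protocol_name(number: int) -> str:
    """Resolve an IP protocol number to its IANA name, if known here."""
    for name, value in IP_PROTOCOLS.items():
        if value == number:
            return name
    return str(number)

print(protocol_name(112))  # vrrp
```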
2026-04-09 00:02:37.667598 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-09 00:02:37.678893 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-09 00:02:37.686071 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-09 00:02:37.704487 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-09 00:02:37.705845 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-09 00:02:39.609499 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=6893ef95-3382-45f4-aafb-0f68efd9deff]
2026-04-09 00:02:39.614389 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-09 00:02:39.622449 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-09 00:02:39.623085 | orchestrator | local_file.inventory: Creating...
2026-04-09 00:02:39.628073 | orchestrator | local_file.inventory: Creation complete after 0s [id=6eba1514357672ec0dfcc4cbfd9f48a954565a5c]
2026-04-09 00:02:39.630097 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d4e7a1fb082371060750bbe98a2c4981c31db8ab]
2026-04-09 00:02:40.633232 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=6893ef95-3382-45f4-aafb-0f68efd9deff]
2026-04-09 00:02:47.667516 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-09 00:02:47.671809 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-09 00:02:47.695280 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-09 00:02:47.698418 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-09 00:02:47.705772 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-09 00:02:47.706926 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-09 00:02:57.676051 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-09 00:02:57.676173 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-09 00:02:57.696455 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-09 00:02:57.698691 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-09 00:02:57.705941 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-09 00:02:57.707199 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-09 00:02:58.263946 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=24d89df0-ef48-4470-a935-fcc9998c523f]
2026-04-09 00:03:07.684661 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-09 00:03:07.684815 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-09 00:03:07.697141 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-09 00:03:07.699603 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-09 00:03:07.707145 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-09 00:03:08.373677 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=d408348d-ae10-43e7-8c17-b2485bd284d4]
2026-04-09 00:03:08.540361 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=c9f3c2d7-3eb8-4ff3-96ed-1e92e0b6d6b9]
2026-04-09 00:03:08.703686 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=59406c4f-7179-43f0-89b6-deb9ac53e375]
2026-04-09 00:03:08.926417 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=a4345385-cb81-4fd5-a2e4-89685e5883aa]
2026-04-09 00:03:09.058855 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=5bd8c72a-279a-43eb-a697-05101586d846]
2026-04-09 00:03:09.074275 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-09 00:03:09.080966 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-09 00:03:09.082774 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-09 00:03:09.084929 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8976028649646179038]
2026-04-09 00:03:09.085738 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-09 00:03:09.086176 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-09 00:03:09.086455 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-09 00:03:09.111594 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-09 00:03:09.112763 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-09 00:03:09.114338 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-09 00:03:09.116537 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-09 00:03:09.119546 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-09 00:03:12.598597 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=a4345385-cb81-4fd5-a2e4-89685e5883aa/c2c77d89-653e-4715-a798-7d926e5d00ec]
2026-04-09 00:03:12.603005 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=59406c4f-7179-43f0-89b6-deb9ac53e375/d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47]
2026-04-09 00:03:12.629286 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=24d89df0-ef48-4470-a935-fcc9998c523f/699b0239-fef5-4b39-83a4-6673e212f6a1]
2026-04-09 00:03:18.713923 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=59406c4f-7179-43f0-89b6-deb9ac53e375/cb2ae45d-eb27-4723-ba8e-6f14f0885645]
2026-04-09 00:03:18.748481 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=24d89df0-ef48-4470-a935-fcc9998c523f/d010236c-a8cf-44aa-aea8-1599ad338c7a]
2026-04-09 00:03:18.750419 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=a4345385-cb81-4fd5-a2e4-89685e5883aa/bf0ee1e9-2919-47e1-8e63-acece2856b48]
2026-04-09 00:03:18.867706 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=59406c4f-7179-43f0-89b6-deb9ac53e375/5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3]
2026-04-09 00:03:18.878504 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=a4345385-cb81-4fd5-a2e4-89685e5883aa/646eefac-58ec-4f92-9595-08f65c34439b]
2026-04-09 00:03:18.894106 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=24d89df0-ef48-4470-a935-fcc9998c523f/6ddb4c8f-ad36-4043-a4c5-c841e18226a7]
2026-04-09 00:03:19.123188 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-09 00:03:29.124336 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-09 00:03:29.591461 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=1f707a18-f17c-44d2-afdc-f70dcad5f1c3]
2026-04-09 00:03:29.605636 | orchestrator |
2026-04-09 00:03:29.605730 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-09 00:03:29.605741 | orchestrator |
2026-04-09 00:03:29.605765 | orchestrator | Outputs:
2026-04-09 00:03:29.605773 | orchestrator |
2026-04-09 00:03:29.605780 | orchestrator | manager_address =
2026-04-09 00:03:29.605788 | orchestrator | private_key =
2026-04-09 00:03:29.681007 | orchestrator | ok: Runtime: 0:01:09.412892
2026-04-09 00:03:29.704212 |
2026-04-09 00:03:29.704334 | TASK [Create infrastructure (stable)]
2026-04-09 00:03:30.239251 | orchestrator | skipping: Conditional result was False
2026-04-09 00:03:30.248714 |
2026-04-09 00:03:30.248855 | TASK [Fetch manager address]
2026-04-09 00:03:30.732755 | orchestrator | ok
2026-04-09 00:03:30.745351 |
2026-04-09 00:03:30.745474 | TASK [Set manager_host address]
2026-04-09 00:03:30.848141 | orchestrator | ok
2026-04-09 00:03:30.855411 |
2026-04-09 00:03:30.855522 | LOOP [Update ansible collections]
2026-04-09 00:03:31.842182 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:03:31.842465 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-09 00:03:31.842504 | orchestrator | Starting galaxy collection install process
2026-04-09 00:03:31.842529 | orchestrator | Process install dependency map
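Each `openstack_compute_volume_attach_v2` id in the apply output above is a composite of the server id and the volume id joined by a slash, so the server-to-volume mapping can be recovered directly from the ids. A small sketch using one id from the log (illustrative only):

```python
import uuid

# One attachment id from the apply output above: "<server_id>/<volume_id>".
attach_id = "24d89df0-ef48-4470-a935-fcc9998c523f/699b0239-fef5-4b39-83a4-6673e212f6a1"

# The provider encodes the server/volume pair in the resource id itself,
# so splitting on the slash recovers both halves.
server_id, volume_id = attach_id.split("/")

# Both halves are plain UUIDs; uuid.UUID raises ValueError on malformed input.
uuid.UUID(server_id)
uuid.UUID(volume_id)
print(server_id)  # 24d89df0-ef48-4470-a935-fcc9998c523f
```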
2026-04-09 00:03:31.842551 | orchestrator | Starting collection install process
2026-04-09 00:03:31.842583 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-04-09 00:03:31.842610 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-04-09 00:03:31.842642 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-09 00:03:31.842722 | orchestrator | ok: Item: commons Runtime: 0:00:00.657720
2026-04-09 00:03:32.886770 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-09 00:03:32.887028 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:03:32.887089 | orchestrator | Starting galaxy collection install process
2026-04-09 00:03:32.887131 | orchestrator | Process install dependency map
2026-04-09 00:03:32.887170 | orchestrator | Starting collection install process
2026-04-09 00:03:32.887204 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-04-09 00:03:32.887235 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-04-09 00:03:32.887268 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-09 00:03:32.887321 | orchestrator | ok: Item: services Runtime: 0:00:00.703495
2026-04-09 00:03:32.909643 |
2026-04-09 00:03:32.909801 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-09 00:03:43.518685 | orchestrator | ok
2026-04-09 00:03:43.530793 |
2026-04-09 00:03:43.530965 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-09 00:04:43.569264 | orchestrator | ok
2026-04-09 00:04:43.577855 |
2026-04-09 00:04:43.578003 | TASK [Fetch manager ssh hostkey]
2026-04-09 00:04:45.154818 | orchestrator | Output suppressed because no_log was given
2026-04-09 00:04:45.171561 |
2026-04-09 00:04:45.171758 | TASK [Get ssh keypair from terraform environment]
2026-04-09 00:04:45.713542 | orchestrator | ok: Runtime: 0:00:00.006166
2026-04-09 00:04:45.734464 |
2026-04-09 00:04:45.734682 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-09 00:04:45.777636 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager; it produces no further output here and takes a few minutes to complete.
2026-04-09 00:04:45.788131 |
2026-04-09 00:04:45.788263 | TASK [Run manager part 0]
2026-04-09 00:04:46.667519 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:04:46.710054 | orchestrator |
2026-04-09 00:04:46.710097 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-09 00:04:46.710109 | orchestrator |
2026-04-09 00:04:46.710127 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-09 00:04:48.552515 | orchestrator | ok: [testbed-manager]
2026-04-09 00:04:48.552578 | orchestrator |
2026-04-09 00:04:48.552628 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-09 00:04:48.552641 | orchestrator |
2026-04-09 00:04:48.552654 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:04:50.484252 | orchestrator | ok: [testbed-manager]
2026-04-09 00:04:50.484299 | orchestrator |
2026-04-09 00:04:50.484305 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-09 00:04:51.073866 | orchestrator | ok: [testbed-manager]
2026-04-09 00:04:51.074454 | orchestrator |
2026-04-09 00:04:51.074473 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-09 00:04:51.113132 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:04:51.113183 | orchestrator |
2026-04-09 00:04:51.113196 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-09 00:04:51.140514 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:04:51.140559 | orchestrator |
2026-04-09 00:04:51.140566 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-09 00:04:51.167555 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:04:51.167602 | orchestrator |
2026-04-09 00:04:51.167610 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-09 00:04:51.847459 | orchestrator | changed: [testbed-manager]
2026-04-09 00:04:51.847501 | orchestrator |
2026-04-09 00:04:51.847508 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-09 00:07:40.378124 | orchestrator | changed: [testbed-manager]
2026-04-09 00:07:40.378289 | orchestrator |
2026-04-09 00:07:40.378311 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-09 00:09:10.703821 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:10.703895 | orchestrator |
2026-04-09 00:09:10.703915 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-09 00:09:33.457066 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:33.457189 | orchestrator |
2026-04-09 00:09:33.457219 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-09 00:09:42.199938 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:42.200053 | orchestrator |
2026-04-09 00:09:42.200083 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-09 00:09:42.240769 | orchestrator | ok: [testbed-manager] 2026-04-09 00:09:42.240840 | orchestrator | 2026-04-09 00:09:42.240854 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-09 00:09:43.024143 | orchestrator | ok: [testbed-manager] 2026-04-09 00:09:43.024193 | orchestrator | 2026-04-09 00:09:43.024200 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-09 00:09:43.759319 | orchestrator | changed: [testbed-manager] 2026-04-09 00:09:43.759355 | orchestrator | 2026-04-09 00:09:43.759363 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-09 00:09:50.025535 | orchestrator | changed: [testbed-manager] 2026-04-09 00:09:50.025622 | orchestrator | 2026-04-09 00:09:50.025646 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-09 00:09:55.911014 | orchestrator | changed: [testbed-manager] 2026-04-09 00:09:55.911112 | orchestrator | 2026-04-09 00:09:55.911128 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-09 00:09:58.539199 | orchestrator | changed: [testbed-manager] 2026-04-09 00:09:58.539271 | orchestrator | 2026-04-09 00:09:58.539285 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-09 00:10:00.242205 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:00.242863 | orchestrator | 2026-04-09 00:10:00.242890 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-09 00:10:01.366802 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 00:10:01.366961 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 00:10:01.366979 | orchestrator | 2026-04-09 00:10:01.366995 | orchestrator | TASK [Sync 
sources in /opt/src] ************************************************ 2026-04-09 00:10:01.410601 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 00:10:01.410653 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 00:10:01.410660 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 00:10:01.410666 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-09 00:10:08.517667 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 00:10:08.517736 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 00:10:08.517750 | orchestrator | 2026-04-09 00:10:08.517762 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-09 00:10:09.059079 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:09.059150 | orchestrator | 2026-04-09 00:10:09.059166 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-09 00:13:31.840342 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-09 00:13:31.840491 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-09 00:13:31.840510 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-09 00:13:31.840522 | orchestrator | 2026-04-09 00:13:31.840534 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-09 00:13:34.143811 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-09 00:13:34.143908 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-09 00:13:34.143927 | orchestrator | 2026-04-09 00:13:34.143950 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-09 
00:13:34.143971 | orchestrator | 2026-04-09 00:13:34.143993 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:13:35.519244 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:35.519279 | orchestrator | 2026-04-09 00:13:35.519285 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 00:13:35.556055 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:35.556088 | orchestrator | 2026-04-09 00:13:35.556094 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 00:13:35.613244 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:35.613285 | orchestrator | 2026-04-09 00:13:35.613292 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 00:13:36.368271 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:36.368333 | orchestrator | 2026-04-09 00:13:36.368342 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 00:13:37.059354 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:37.059487 | orchestrator | 2026-04-09 00:13:37.059503 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 00:13:38.395957 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-09 00:13:38.395995 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-09 00:13:38.396002 | orchestrator | 2026-04-09 00:13:38.396009 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 00:13:39.758821 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:39.758861 | orchestrator | 2026-04-09 00:13:39.758868 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 00:13:41.380671 | orchestrator | changed: 
[testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:13:41.380707 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-09 00:13:41.380719 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:13:41.380723 | orchestrator | 2026-04-09 00:13:41.380729 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 00:13:41.425597 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:41.425632 | orchestrator | 2026-04-09 00:13:41.425638 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 00:13:41.499004 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:41.499079 | orchestrator | 2026-04-09 00:13:41.499092 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 00:13:42.042864 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:42.042921 | orchestrator | 2026-04-09 00:13:42.042927 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 00:13:42.103689 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:42.103780 | orchestrator | 2026-04-09 00:13:42.103798 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 00:13:42.929632 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:13:42.929689 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:42.929695 | orchestrator | 2026-04-09 00:13:42.929700 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 00:13:42.966539 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:42.966642 | orchestrator | 2026-04-09 00:13:42.966659 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 
00:13:43.001620 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:43.001677 | orchestrator | 2026-04-09 00:13:43.001686 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 00:13:43.036780 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:43.036838 | orchestrator | 2026-04-09 00:13:43.036846 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 00:13:43.110241 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:43.110292 | orchestrator | 2026-04-09 00:13:43.110312 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 00:13:43.833882 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:43.833920 | orchestrator | 2026-04-09 00:13:43.833928 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-09 00:13:43.833935 | orchestrator | 2026-04-09 00:13:43.833942 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:13:45.219894 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:45.219988 | orchestrator | 2026-04-09 00:13:45.220006 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-09 00:13:46.193177 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:46.193213 | orchestrator | 2026-04-09 00:13:46.193219 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:13:46.193225 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-09 00:13:46.193229 | orchestrator | 2026-04-09 00:13:46.625826 | orchestrator | ok: Runtime: 0:09:00.223329 2026-04-09 00:13:46.639760 | 2026-04-09 00:13:46.639927 | TASK [Point out that the login on the manager is now possible] 2026-04-09 00:13:46.672298 | 
orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-09 00:13:46.680434 | 2026-04-09 00:13:46.680564 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-09 00:13:46.742056 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-09 00:13:46.763349 | 2026-04-09 00:13:46.763498 | TASK [Run manager part 1 + 2] 2026-04-09 00:13:48.639798 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 00:13:48.701509 | orchestrator | 2026-04-09 00:13:48.701598 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-09 00:13:48.701617 | orchestrator | 2026-04-09 00:13:48.701646 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:13:51.576516 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:51.576567 | orchestrator | 2026-04-09 00:13:51.576590 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-09 00:13:51.616480 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:51.616545 | orchestrator | 2026-04-09 00:13:51.616556 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-09 00:13:51.656310 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:51.656360 | orchestrator | 2026-04-09 00:13:51.656369 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 00:13:51.705511 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:51.705555 | orchestrator | 2026-04-09 00:13:51.705563 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 00:13:51.770555 | orchestrator | ok: 
[testbed-manager] 2026-04-09 00:13:51.770597 | orchestrator | 2026-04-09 00:13:51.770604 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 00:13:51.825158 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:51.825206 | orchestrator | 2026-04-09 00:13:51.825216 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 00:13:51.862983 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-09 00:13:51.863039 | orchestrator | 2026-04-09 00:13:51.863049 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 00:13:52.546949 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:52.547040 | orchestrator | 2026-04-09 00:13:52.547060 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 00:13:52.596535 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:13:52.596606 | orchestrator | 2026-04-09 00:13:52.596619 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 00:13:53.961682 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:53.961758 | orchestrator | 2026-04-09 00:13:53.961774 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 00:13:54.519371 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:54.519435 | orchestrator | 2026-04-09 00:13:54.519445 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 00:13:55.671720 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:55.671776 | orchestrator | 2026-04-09 00:13:55.671783 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 00:14:11.305856 | 
orchestrator | changed: [testbed-manager] 2026-04-09 00:14:11.305979 | orchestrator | 2026-04-09 00:14:11.306007 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-09 00:14:12.575092 | orchestrator | ok: [testbed-manager] 2026-04-09 00:14:12.575133 | orchestrator | 2026-04-09 00:14:12.575141 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-09 00:14:12.631795 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:14:12.631886 | orchestrator | 2026-04-09 00:14:12.631904 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-09 00:14:13.524025 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:13.524176 | orchestrator | 2026-04-09 00:14:13.524191 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-09 00:14:14.436074 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:14.436115 | orchestrator | 2026-04-09 00:14:14.436124 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-09 00:14:14.998360 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:14.998447 | orchestrator | 2026-04-09 00:14:14.998462 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-09 00:14:15.042599 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 00:14:15.042664 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 00:14:15.042670 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 00:14:15.042675 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-09 00:14:17.216402 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:17.216489 | orchestrator | 2026-04-09 00:14:17.216502 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-09 00:14:25.736590 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-09 00:14:25.736631 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-09 00:14:25.736638 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-09 00:14:25.736644 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-09 00:14:25.736653 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-09 00:14:25.736658 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-09 00:14:25.736663 | orchestrator | 2026-04-09 00:14:25.736669 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-09 00:14:26.695781 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:26.695816 | orchestrator | 2026-04-09 00:14:26.695822 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-09 00:14:29.716607 | orchestrator | changed: [testbed-manager] 2026-04-09 00:14:29.716652 | orchestrator | 2026-04-09 00:14:29.716661 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-09 00:14:29.760958 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:14:29.760993 | orchestrator | 2026-04-09 00:14:29.761001 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-09 00:16:06.857546 | orchestrator | changed: [testbed-manager] 2026-04-09 00:16:06.857642 | orchestrator | 2026-04-09 00:16:06.857658 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-09 00:16:08.044084 | orchestrator | ok: [testbed-manager] 2026-04-09 00:16:08.044202 | 
orchestrator | 2026-04-09 00:16:08.044220 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:16:08.044234 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-09 00:16:08.044246 | orchestrator | 2026-04-09 00:16:08.395545 | orchestrator | ok: Runtime: 0:02:21.062973 2026-04-09 00:16:08.413193 | 2026-04-09 00:16:08.413372 | TASK [Reboot manager] 2026-04-09 00:16:09.965610 | orchestrator | ok: Runtime: 0:00:00.956402 2026-04-09 00:16:09.974670 | 2026-04-09 00:16:09.974820 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-09 00:16:26.384770 | orchestrator | ok 2026-04-09 00:16:26.396168 | 2026-04-09 00:16:26.396307 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-09 00:17:26.441959 | orchestrator | ok 2026-04-09 00:17:26.453529 | 2026-04-09 00:17:26.453677 | TASK [Deploy manager + bootstrap nodes] 2026-04-09 00:17:28.943608 | orchestrator | 2026-04-09 00:17:28.943805 | orchestrator | # DEPLOY MANAGER 2026-04-09 00:17:28.943830 | orchestrator | 2026-04-09 00:17:28.943845 | orchestrator | + set -e 2026-04-09 00:17:28.943858 | orchestrator | + echo 2026-04-09 00:17:28.943872 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-09 00:17:28.943889 | orchestrator | + echo 2026-04-09 00:17:28.943939 | orchestrator | + cat /opt/manager-vars.sh 2026-04-09 00:17:28.946984 | orchestrator | export NUMBER_OF_NODES=6 2026-04-09 00:17:28.947011 | orchestrator | 2026-04-09 00:17:28.947023 | orchestrator | export CEPH_VERSION=reef 2026-04-09 00:17:28.947036 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-09 00:17:28.947048 | orchestrator | export MANAGER_VERSION=latest 2026-04-09 00:17:28.947070 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-09 00:17:28.947081 | orchestrator | 2026-04-09 00:17:28.947099 | orchestrator | export ARA=false 2026-04-09 00:17:28.947110 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-09 00:17:28.947163 | orchestrator | export TEMPEST=true 2026-04-09 00:17:28.947175 | orchestrator | export IS_ZUUL=true 2026-04-09 00:17:28.947186 | orchestrator | 2026-04-09 00:17:28.947204 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:17:28.947216 | orchestrator | export EXTERNAL_API=false 2026-04-09 00:17:28.947226 | orchestrator | 2026-04-09 00:17:28.947237 | orchestrator | export IMAGE_USER=ubuntu 2026-04-09 00:17:28.947251 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:28.947262 | orchestrator | 2026-04-09 00:17:28.947273 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-09 00:17:28.947290 | orchestrator | 2026-04-09 00:17:28.947301 | orchestrator | + echo 2026-04-09 00:17:28.947318 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:17:28.948240 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:17:28.948257 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:17:28.948271 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:17:28.948283 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:17:28.948481 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:17:28.948496 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:17:28.948507 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:17:28.948517 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 00:17:28.948672 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 00:17:28.948688 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:17:28.948699 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:17:28.948710 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 00:17:28.948721 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 00:17:28.948731 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 00:17:28.948750 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 00:17:28.948761 | orchestrator | ++ export 
ARA=false 2026-04-09 00:17:28.948772 | orchestrator | ++ ARA=false 2026-04-09 00:17:28.948783 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:17:28.948794 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:17:28.948804 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:17:28.948815 | orchestrator | ++ TEMPEST=true 2026-04-09 00:17:28.948826 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:17:28.948841 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:17:28.948852 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:17:28.948863 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:17:28.948874 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:17:28.948885 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:17:28.948895 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:17:28.948906 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:17:28.948917 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:28.948927 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:28.948938 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:17:28.948949 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:17:28.948964 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-09 00:17:29.005690 | orchestrator | + docker version 2026-04-09 00:17:29.127109 | orchestrator | Client: Docker Engine - Community 2026-04-09 00:17:29.127267 | orchestrator | Version: 27.5.1 2026-04-09 00:17:29.127283 | orchestrator | API version: 1.47 2026-04-09 00:17:29.127297 | orchestrator | Go version: go1.22.11 2026-04-09 00:17:29.127307 | orchestrator | Git commit: 9f9e405 2026-04-09 00:17:29.127319 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 00:17:29.127330 | orchestrator | OS/Arch: linux/amd64 2026-04-09 00:17:29.127341 | orchestrator | Context: default 2026-04-09 00:17:29.127352 | orchestrator | 2026-04-09 00:17:29.127363 | 
orchestrator | Server: Docker Engine - Community 2026-04-09 00:17:29.127374 | orchestrator | Engine: 2026-04-09 00:17:29.127384 | orchestrator | Version: 27.5.1 2026-04-09 00:17:29.127396 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-09 00:17:29.127436 | orchestrator | Go version: go1.22.11 2026-04-09 00:17:29.127447 | orchestrator | Git commit: 4c9b3b0 2026-04-09 00:17:29.127458 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 00:17:29.127469 | orchestrator | OS/Arch: linux/amd64 2026-04-09 00:17:29.127479 | orchestrator | Experimental: false 2026-04-09 00:17:29.127490 | orchestrator | containerd: 2026-04-09 00:17:29.127514 | orchestrator | Version: v2.2.2 2026-04-09 00:17:29.127526 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-09 00:17:29.127537 | orchestrator | runc: 2026-04-09 00:17:29.127548 | orchestrator | Version: 1.3.4 2026-04-09 00:17:29.127558 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-09 00:17:29.127569 | orchestrator | docker-init: 2026-04-09 00:17:29.127580 | orchestrator | Version: 0.19.0 2026-04-09 00:17:29.127591 | orchestrator | GitCommit: de40ad0 2026-04-09 00:17:29.130870 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-09 00:17:29.141469 | orchestrator | + set -e 2026-04-09 00:17:29.141519 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:17:29.141531 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:17:29.141544 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:17:29.141554 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 00:17:29.141565 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 00:17:29.141576 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:17:29.141588 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:17:29.141598 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 00:17:29.141609 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 00:17:29.141620 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 00:17:29.141630 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 00:17:29.141641 | orchestrator | ++ export ARA=false 2026-04-09 00:17:29.141651 | orchestrator | ++ ARA=false 2026-04-09 00:17:29.141662 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:17:29.141672 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:17:29.141683 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:17:29.141693 | orchestrator | ++ TEMPEST=true 2026-04-09 00:17:29.141704 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:17:29.141714 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:17:29.141725 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:17:29.141736 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:17:29.141747 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:17:29.141757 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:17:29.141768 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:17:29.141779 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:17:29.141789 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:29.141800 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:17:29.141810 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:17:29.141821 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:17:29.141832 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:17:29.141842 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:17:29.141853 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:17:29.141863 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:17:29.141879 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:17:29.141898 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 00:17:29.141909 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:17:29.141920 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-09 00:17:29.148822 | orchestrator | + set -e 2026-04-09 00:17:29.148846 | orchestrator | + VERSION=reef 2026-04-09 00:17:29.149917 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:29.155957 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-09 00:17:29.155999 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:29.161360 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-09 00:17:29.168162 | orchestrator | + set -e 2026-04-09 00:17:29.168232 | orchestrator | + VERSION=2024.2 2026-04-09 00:17:29.169251 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:29.172528 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-09 00:17:29.172569 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-09 00:17:29.178263 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-09 00:17:29.178972 | orchestrator | ++ semver latest 7.0.0 2026-04-09 00:17:29.244730 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:17:29.244858 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:17:29.244880 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-09 00:17:29.245110 | orchestrator | ++ semver latest 10.0.0-0 2026-04-09 00:17:29.305842 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:17:29.306379 | orchestrator | ++ semver 2024.2 2025.1 2026-04-09 00:17:29.360834 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:17:29.360940 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-09 00:17:29.439954 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-09 00:17:29.440971 | orchestrator | + source /opt/venv/bin/activate 
2026-04-09 00:17:29.442327 | orchestrator | ++ deactivate nondestructive
2026-04-09 00:17:29.442379 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:17:29.442399 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:17:29.442412 | orchestrator | ++ hash -r
2026-04-09 00:17:29.442423 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:17:29.442433 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-09 00:17:29.442444 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-09 00:17:29.442458 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-09 00:17:29.442471 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-09 00:17:29.442481 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-09 00:17:29.442492 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-09 00:17:29.442503 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-09 00:17:29.442515 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:17:29.442526 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:17:29.442537 | orchestrator | ++ export PATH
2026-04-09 00:17:29.442548 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:17:29.442566 | orchestrator | ++ '[' -z '' ']'
2026-04-09 00:17:29.442577 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-09 00:17:29.442588 | orchestrator | ++ PS1='(venv) '
2026-04-09 00:17:29.442598 | orchestrator | ++ export PS1
2026-04-09 00:17:29.442609 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-09 00:17:29.442620 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-09 00:17:29.442632 | orchestrator | ++ hash -r
2026-04-09 00:17:29.442666 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-09 00:17:32.143297 | orchestrator |
2026-04-09 00:17:32.143418 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-09 00:17:32.143435 | orchestrator |
2026-04-09 00:17:32.143448 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-09 00:17:32.736794 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:32.736904 | orchestrator |
2026-04-09 00:17:32.736920 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-09 00:17:33.604507 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:33.604614 | orchestrator |
2026-04-09 00:17:33.604631 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-09 00:17:33.604644 | orchestrator |
2026-04-09 00:17:33.604656 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:17:35.778387 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:35.778498 | orchestrator |
2026-04-09 00:17:35.778516 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-09 00:17:35.829632 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:35.829691 | orchestrator |
2026-04-09 00:17:35.829699 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-09 00:17:36.255086 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:36.255212 | orchestrator |
2026-04-09 00:17:36.255228 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-09 00:17:36.296675 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:17:36.296764 | orchestrator |
2026-04-09 00:17:36.296778 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-09 00:17:36.609086 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:36.609222 | orchestrator |
2026-04-09 00:17:36.609239 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-09 00:17:36.898058 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:36.898256 | orchestrator |
2026-04-09 00:17:36.898272 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-09 00:17:36.998587 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:17:36.998674 | orchestrator |
2026-04-09 00:17:36.998689 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-09 00:17:36.998702 | orchestrator |
2026-04-09 00:17:36.998713 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:17:38.560635 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:38.560738 | orchestrator |
2026-04-09 00:17:38.560754 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-09 00:17:39.919766 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-09 00:17:39.919892 | orchestrator |
2026-04-09 00:17:39.919919 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-09 00:17:39.966604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-09 00:17:39.966723 | orchestrator |
2026-04-09 00:17:39.966748 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-09 00:17:41.096276 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-09 00:17:41.096367 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-09 00:17:41.096378 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-09 00:17:41.096388 | orchestrator |
2026-04-09 00:17:41.096397 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-09 00:17:42.830652 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-09 00:17:42.830767 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-09 00:17:42.830786 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-09 00:17:42.830799 | orchestrator |
2026-04-09 00:17:42.830811 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-09 00:17:43.393538 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:17:43.393641 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:43.393655 | orchestrator |
2026-04-09 00:17:43.393668 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-09 00:17:43.959730 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:17:43.959838 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:43.959856 | orchestrator |
2026-04-09 00:17:43.959868 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-09 00:17:44.005751 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:17:44.005878 | orchestrator |
2026-04-09 00:17:44.005903 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-09 00:17:44.324522 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:44.324624 | orchestrator |
2026-04-09 00:17:44.324640 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-09 00:17:44.408021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-09 00:17:44.408151 | orchestrator |
2026-04-09 00:17:44.408168 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-09 00:17:45.911504 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:45.911565 | orchestrator |
2026-04-09 00:17:45.911578 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-09 00:17:46.644232 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:46.644334 | orchestrator |
2026-04-09 00:17:46.644354 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-09 00:17:57.110404 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:57.110517 | orchestrator |
2026-04-09 00:17:57.110557 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-09 00:17:57.151618 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:17:57.151701 | orchestrator |
2026-04-09 00:17:57.151713 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-09 00:17:57.151722 | orchestrator |
2026-04-09 00:17:57.151731 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:17:58.741851 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:58.741952 | orchestrator |
2026-04-09 00:17:58.741997 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-09 00:17:58.839942 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-09 00:17:58.840043 | orchestrator |
2026-04-09 00:17:58.840060 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-09 00:17:58.895420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:17:58.895514 | orchestrator |
2026-04-09 00:17:58.895529 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-09 00:18:01.067223 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:01.067322 | orchestrator |
2026-04-09 00:18:01.067337 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-09 00:18:01.121311 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:01.121400 | orchestrator |
2026-04-09 00:18:01.121414 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-09 00:18:01.235861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-09 00:18:01.235972 | orchestrator |
2026-04-09 00:18:01.235990 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-09 00:18:03.770692 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-09 00:18:03.770803 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-09 00:18:03.770818 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-09 00:18:03.770830 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-09 00:18:03.770841 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-09 00:18:03.770852 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-09 00:18:03.770863 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-09 00:18:03.770874 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-09 00:18:03.770885 | orchestrator |
2026-04-09 00:18:03.770897 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-09 00:18:04.329444 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:04.329547 | orchestrator |
2026-04-09 00:18:04.329563 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-09 00:18:04.894682 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:04.894762 | orchestrator |
2026-04-09 00:18:04.894772 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-09 00:18:04.961211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-09 00:18:04.961341 | orchestrator |
2026-04-09 00:18:04.961367 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-09 00:18:06.058276 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-09 00:18:06.058351 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-09 00:18:06.058358 | orchestrator |
2026-04-09 00:18:06.058364 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-09 00:18:06.625503 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:06.625601 | orchestrator |
2026-04-09 00:18:06.625619 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-09 00:18:06.682220 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:18:06.682302 | orchestrator |
2026-04-09 00:18:06.682312 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-09 00:18:06.749547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-09 00:18:06.749666 | orchestrator |
2026-04-09 00:18:06.749682 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-09 00:18:07.295263 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:07.295390 | orchestrator |
2026-04-09 00:18:07.295415 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-09 00:18:07.353449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-09 00:18:07.353582 | orchestrator |
2026-04-09 00:18:07.353599 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-09 00:18:08.560678 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:18:08.560780 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:18:08.560796 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:08.560809 | orchestrator |
2026-04-09 00:18:08.560821 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-09 00:18:09.098687 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:09.098814 | orchestrator |
2026-04-09 00:18:09.098840 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-09 00:18:09.145409 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:18:09.145497 | orchestrator |
2026-04-09 00:18:09.145510 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-09 00:18:09.230071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-09 00:18:09.230209 | orchestrator |
2026-04-09 00:18:09.230227 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-09 00:18:09.678423 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:09.678527 | orchestrator |
2026-04-09 00:18:09.678565 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-09 00:18:10.049277 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:10.049380 | orchestrator |
2026-04-09 00:18:10.049395 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-09 00:18:11.118534 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-09 00:18:11.118625 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-09 00:18:11.118636 | orchestrator |
2026-04-09 00:18:11.118646 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-09 00:18:11.735909 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:11.736013 | orchestrator |
2026-04-09 00:18:11.736030 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-09 00:18:12.072762 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:12.072862 | orchestrator |
2026-04-09 00:18:12.072878 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-09 00:18:12.388263 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:12.388360 | orchestrator |
2026-04-09 00:18:12.388376 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-09 00:18:12.425257 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:18:12.425386 | orchestrator |
2026-04-09 00:18:12.425404 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-09 00:18:12.493754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-09 00:18:12.493846 | orchestrator |
2026-04-09 00:18:12.493859 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-09 00:18:12.522321 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:12.522404 | orchestrator |
2026-04-09 00:18:12.522415 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-09 00:18:14.314940 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-09 00:18:14.315029 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-09 00:18:14.315041 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-09 00:18:14.315050 | orchestrator |
2026-04-09 00:18:14.315059 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-09 00:18:14.922684 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:14.922788 | orchestrator |
2026-04-09 00:18:14.922804 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-09 00:18:15.541245 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:15.541362 | orchestrator |
2026-04-09 00:18:15.541387 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-09 00:18:16.187482 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:16.187584 | orchestrator |
2026-04-09 00:18:16.187603 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-09 00:18:16.246830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-09 00:18:16.246931 | orchestrator |
2026-04-09 00:18:16.246947 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-09 00:18:16.281566 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:16.281668 | orchestrator |
2026-04-09 00:18:16.281685 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-09 00:18:16.901708 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-09 00:18:16.901806 | orchestrator |
2026-04-09 00:18:16.901821 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-09 00:18:16.976970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-09 00:18:16.977067 | orchestrator |
2026-04-09 00:18:16.977083 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-09 00:18:17.666715 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:17.666819 | orchestrator |
2026-04-09 00:18:17.666836 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-09 00:18:18.284681 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:18.284780 | orchestrator |
2026-04-09 00:18:18.284796 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-09 00:18:18.342606 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:18:18.342693 | orchestrator |
2026-04-09 00:18:18.342707 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-09 00:18:18.403962 | orchestrator | ok: [testbed-manager]
2026-04-09 00:18:18.404055 | orchestrator |
2026-04-09 00:18:18.404070 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-09 00:18:19.227607 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:19.227713 | orchestrator |
2026-04-09 00:18:19.227731 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-09 00:19:24.627825 | orchestrator | changed: [testbed-manager]
2026-04-09 00:19:24.627946 | orchestrator |
2026-04-09 00:19:24.627963 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-09 00:19:25.482575 | orchestrator | ok: [testbed-manager]
2026-04-09 00:19:25.482677 | orchestrator |
2026-04-09 00:19:25.482693 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-09 00:19:25.534600 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:19:25.534686 | orchestrator |
2026-04-09 00:19:25.534698 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-09 00:19:28.016960 | orchestrator | changed: [testbed-manager]
2026-04-09 00:19:28.017065 | orchestrator |
2026-04-09 00:19:28.017141 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-09 00:19:28.089556 | orchestrator | ok: [testbed-manager]
2026-04-09 00:19:28.089652 | orchestrator |
2026-04-09 00:19:28.089690 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-09 00:19:28.089704 | orchestrator |
2026-04-09 00:19:28.089716 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-09 00:19:28.136360 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:19:28.136454 | orchestrator |
2026-04-09 00:19:28.136469 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-09 00:20:28.185668 | orchestrator | Pausing for 60 seconds
2026-04-09 00:20:28.185779 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:28.185795 | orchestrator |
2026-04-09 00:20:28.185809 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-09 00:20:31.164642 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:31.164755 | orchestrator |
2026-04-09 00:20:31.164773 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-09 00:21:12.570766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-09 00:21:12.570843 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
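The "Wait for an healthy manager service" handler above retries until the container's Docker health status flips to healthy, the same pattern the wait_for_container_healthy shell function traced further down in this log implements. A minimal sketch of that loop: the `health_probe` indirection is our own assumption (added so the loop can be exercised without a Docker daemon), while the `docker inspect` format string is taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy helper traced later in this log.
# health_probe is an assumed indirection so the loop is testable without
# Docker; the inspect format string matches the trace's
# `docker inspect -f '{{.State.Health.Status}}'` call.
set -e

health_probe() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts="$1" name="$2" attempt_num=1
    until [ "$(health_probe "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}

# Invoked as in the trace, e.g.:
#   wait_for_container_healthy 60 ceph-ansible
```

With `set -e` in the calling script, the `return 1` on timeout aborts the whole deployment run, which is the failure mode the surrounding trace relies on.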
2026-04-09 00:21:12.570851 | orchestrator | changed: [testbed-manager]
2026-04-09 00:21:12.570880 | orchestrator |
2026-04-09 00:21:12.570887 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-09 00:21:18.176988 | orchestrator | changed: [testbed-manager]
2026-04-09 00:21:18.177136 | orchestrator |
2026-04-09 00:21:18.177153 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-09 00:21:18.258106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-09 00:21:18.258179 | orchestrator |
2026-04-09 00:21:18.258189 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-09 00:21:18.258197 | orchestrator |
2026-04-09 00:21:18.258204 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-09 00:21:18.303352 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:21:18.303437 | orchestrator |
2026-04-09 00:21:18.303448 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-09 00:21:18.367795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-09 00:21:18.367882 | orchestrator |
2026-04-09 00:21:18.367898 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-09 00:21:19.127343 | orchestrator | changed: [testbed-manager]
2026-04-09 00:21:19.127445 | orchestrator |
2026-04-09 00:21:19.127461 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-09 00:21:22.202590 | orchestrator | ok: [testbed-manager]
2026-04-09 00:21:22.202685 | orchestrator |
2026-04-09 00:21:22.202702 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-09 00:21:22.274784 | orchestrator | ok: [testbed-manager] => {
    "version_check_result.stdout_lines": [
        "=== OSISM Container Version Check ===",
        "Checking running containers against expected versions...",
        "",
        "Checking service: inventory_reconciler (Inventory Reconciler Service)",
        " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/inventory-reconciler:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: osism-ansible (OSISM Ansible Service)",
        " Expected: registry.osism.tech/osism/osism-ansible:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism-ansible:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
        " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism-kubernetes:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: ceph-ansible (Ceph-Ansible Service)",
        " Expected: registry.osism.tech/osism/ceph-ansible:reef",
        " Enabled: true",
        " Running: registry.osism.tech/osism/ceph-ansible:reef",
        " Status: ✅ MATCH",
        "",
        "Checking service: kolla-ansible (Kolla-Ansible Service)",
        " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
        " Enabled: true",
        " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
        " Status: ✅ MATCH",
        "",
        "Checking service: osismclient (OSISM Client)",
        " Expected: registry.osism.tech/osism/osism:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: ara-server (ARA Server)",
        " Expected: registry.osism.tech/osism/ara-server:1.7.3",
        " Enabled: true",
        " Running: registry.osism.tech/osism/ara-server:1.7.3",
        " Status: ✅ MATCH",
        "",
        "Checking service: mariadb (MariaDB for ARA)",
        " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
        " Enabled: true",
        " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
        " Status: ✅ MATCH",
        "",
        "Checking service: frontend (OSISM Frontend)",
        " Expected: registry.osism.tech/osism/osism-frontend:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism-frontend:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: redis (Redis Cache)",
        " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
        " Enabled: true",
        " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
        " Status: ✅ MATCH",
        "",
        "Checking service: api (OSISM API Service)",
        " Expected: registry.osism.tech/osism/osism:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: listener (OpenStack Event Listener)",
        " Expected: registry.osism.tech/osism/osism:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: openstack (OpenStack Integration)",
        " Expected: registry.osism.tech/osism/osism:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: beat (Celery Beat Scheduler)",
        " Expected: registry.osism.tech/osism/osism:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism:latest",
        " Status: ✅ MATCH",
        "",
        "Checking service: flower (Celery Flower Monitor)",
        " Expected: registry.osism.tech/osism/osism:latest",
        " Enabled: true",
        " Running: registry.osism.tech/osism/osism:latest",
        " Status: ✅ MATCH",
        "",
        "=== Summary ===",
        "Errors (version mismatches): 0",
        "Warnings (expected containers not running): 0",
        "",
        "✅ All running containers match expected versions!"
    ]
}
2026-04-09 00:21:22.276205 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-09 00:21:22.328337 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:21:22.328422 | orchestrator |
2026-04-09 00:21:22.328434 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:21:22.328447 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-09 00:21:22.328458 | orchestrator |
2026-04-09 00:21:22.416631 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 00:21:22.416731 | orchestrator | + deactivate
2026-04-09 00:21:22.416746 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-09 00:21:22.416762 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:21:22.416774 | orchestrator | + export PATH
2026-04-09 00:21:22.416785 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-09 00:21:22.416796 | orchestrator | + '[' -n '' ']'
2026-04-09 00:21:22.416807 | orchestrator | + hash -r
2026-04-09 00:21:22.416818 | orchestrator | + '[' -n '' ']'
2026-04-09 00:21:22.416828 | orchestrator | + unset VIRTUAL_ENV
2026-04-09 00:21:22.416838 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-09 00:21:22.416849 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-09 00:21:22.416860 | orchestrator | + unset -f deactivate
2026-04-09 00:21:22.416871 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-04-09 00:21:22.422917 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-09 00:21:22.422988 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-09 00:21:22.423004 | orchestrator | + local max_attempts=60
2026-04-09 00:21:22.423016 | orchestrator | + local name=ceph-ansible
2026-04-09 00:21:22.423027 | orchestrator | + local attempt_num=1
2026-04-09 00:21:22.423888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-09 00:21:22.461243 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-09 00:21:22.461316 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-09 00:21:22.461331 | orchestrator | + local max_attempts=60
2026-04-09 00:21:22.461344 | orchestrator | + local name=kolla-ansible
2026-04-09 00:21:22.461355 | orchestrator | + local attempt_num=1
2026-04-09 00:21:22.461703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-09 00:21:22.493492 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-09 00:21:22.493572 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-09 00:21:22.493628 | orchestrator | + local max_attempts=60
2026-04-09 00:21:22.493637 | orchestrator | + local name=osism-ansible
2026-04-09 00:21:22.493645 | orchestrator | + local attempt_num=1
2026-04-09 00:21:22.493757 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-09 00:21:22.523484 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-09 00:21:22.523564 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-09 00:21:22.523574 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-09 00:21:23.233156 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-09 00:21:23.398797 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-09 00:21:23.398918 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-04-09 00:21:23.398934 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-04-09 00:21:23.398946 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-04-09 00:21:23.398959 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-04-09 00:21:23.398970 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-04-09 00:21:23.398981 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-04-09 00:21:23.398992 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2026-04-09 00:21:23.399020 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-04-09 00:21:23.399031 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-04-09 00:21:23.399042 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute
ago Up About a minute (healthy) 2026-04-09 00:21:23.399083 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-09 00:21:23.399095 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-09 00:21:23.399106 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-09 00:21:23.399117 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-09 00:21:23.399128 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-09 00:21:23.404377 | orchestrator | ++ semver latest 7.0.0 2026-04-09 00:21:23.442448 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:21:23.442531 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:21:23.442545 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-09 00:21:23.446210 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-09 00:21:35.955313 | orchestrator | 2026-04-09 00:21:35 | INFO  | Prepare task for execution of resolvconf. 2026-04-09 00:21:36.171937 | orchestrator | 2026-04-09 00:21:36 | INFO  | Task e3c539c3-596a-4706-a6ae-6063786727ef (resolvconf) was prepared for execution. 2026-04-09 00:21:36.172104 | orchestrator | 2026-04-09 00:21:36 | INFO  | It takes a moment until task e3c539c3-596a-4706-a6ae-6063786727ef (resolvconf) has been started and output is visible here. 
2026-04-09 00:21:49.031632 | orchestrator | 2026-04-09 00:21:49.031753 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-09 00:21:49.031772 | orchestrator | 2026-04-09 00:21:49.031785 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:21:49.031797 | orchestrator | Thursday 09 April 2026 00:21:39 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-04-09 00:21:49.031808 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:49.031820 | orchestrator | 2026-04-09 00:21:49.031832 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-09 00:21:49.031844 | orchestrator | Thursday 09 April 2026 00:21:42 +0000 (0:00:03.687) 0:00:03.873 ******** 2026-04-09 00:21:49.031855 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:49.031872 | orchestrator | 2026-04-09 00:21:49.031891 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-09 00:21:49.031909 | orchestrator | Thursday 09 April 2026 00:21:43 +0000 (0:00:00.064) 0:00:03.938 ******** 2026-04-09 00:21:49.031926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-09 00:21:49.031943 | orchestrator | 2026-04-09 00:21:49.031960 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-09 00:21:49.031978 | orchestrator | Thursday 09 April 2026 00:21:43 +0000 (0:00:00.077) 0:00:04.016 ******** 2026-04-09 00:21:49.032008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:21:49.032027 | orchestrator | 2026-04-09 00:21:49.032080 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-09 00:21:49.032101 | orchestrator | Thursday 09 April 2026 00:21:43 +0000 (0:00:00.073) 0:00:04.089 ******** 2026-04-09 00:21:49.032120 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:49.032140 | orchestrator | 2026-04-09 00:21:49.032158 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-09 00:21:49.032177 | orchestrator | Thursday 09 April 2026 00:21:44 +0000 (0:00:01.133) 0:00:05.223 ******** 2026-04-09 00:21:49.032195 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:49.032212 | orchestrator | 2026-04-09 00:21:49.032229 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-09 00:21:49.032247 | orchestrator | Thursday 09 April 2026 00:21:44 +0000 (0:00:00.053) 0:00:05.276 ******** 2026-04-09 00:21:49.032265 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:49.032284 | orchestrator | 2026-04-09 00:21:49.032303 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-09 00:21:49.032323 | orchestrator | Thursday 09 April 2026 00:21:44 +0000 (0:00:00.576) 0:00:05.853 ******** 2026-04-09 00:21:49.032341 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:49.032360 | orchestrator | 2026-04-09 00:21:49.032378 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-09 00:21:49.032398 | orchestrator | Thursday 09 April 2026 00:21:45 +0000 (0:00:00.079) 0:00:05.932 ******** 2026-04-09 00:21:49.032417 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:49.032435 | orchestrator | 2026-04-09 00:21:49.032453 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-09 00:21:49.032474 | orchestrator | Thursday 09 April 2026 00:21:45 +0000 (0:00:00.560) 0:00:06.493 ******** 2026-04-09 00:21:49.032492 | orchestrator | changed: 
[testbed-manager] 2026-04-09 00:21:49.032509 | orchestrator | 2026-04-09 00:21:49.032521 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-09 00:21:49.032533 | orchestrator | Thursday 09 April 2026 00:21:46 +0000 (0:00:01.063) 0:00:07.557 ******** 2026-04-09 00:21:49.032543 | orchestrator | ok: [testbed-manager] 2026-04-09 00:21:49.032554 | orchestrator | 2026-04-09 00:21:49.032590 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-09 00:21:49.032609 | orchestrator | Thursday 09 April 2026 00:21:47 +0000 (0:00:00.978) 0:00:08.535 ******** 2026-04-09 00:21:49.032628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-09 00:21:49.032647 | orchestrator | 2026-04-09 00:21:49.032665 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-09 00:21:49.032684 | orchestrator | Thursday 09 April 2026 00:21:47 +0000 (0:00:00.081) 0:00:08.617 ******** 2026-04-09 00:21:49.032701 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:49.032718 | orchestrator | 2026-04-09 00:21:49.032729 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:21:49.032741 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 00:21:49.032752 | orchestrator | 2026-04-09 00:21:49.032762 | orchestrator | 2026-04-09 00:21:49.032773 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:21:49.032784 | orchestrator | Thursday 09 April 2026 00:21:48 +0000 (0:00:01.141) 0:00:09.758 ******** 2026-04-09 00:21:49.032794 | orchestrator | =============================================================================== 2026-04-09 00:21:49.032805 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.69s 2026-04-09 00:21:49.032816 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-04-09 00:21:49.032826 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2026-04-09 00:21:49.032837 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s 2026-04-09 00:21:49.032848 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-04-09 00:21:49.032858 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.58s 2026-04-09 00:21:49.032888 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-04-09 00:21:49.032900 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-09 00:21:49.032911 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-09 00:21:49.032922 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-09 00:21:49.032932 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-09 00:21:49.032943 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-09 00:21:49.032954 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-04-09 00:21:49.199829 | orchestrator | + osism apply sshconfig 2026-04-09 00:22:00.560823 | orchestrator | 2026-04-09 00:22:00 | INFO  | Prepare task for execution of sshconfig. 2026-04-09 00:22:00.641772 | orchestrator | 2026-04-09 00:22:00 | INFO  | Task 60098913-39a1-49e5-b8f2-8e64204cb22f (sshconfig) was prepared for execution. 
2026-04-09 00:22:00.641873 | orchestrator | 2026-04-09 00:22:00 | INFO  | It takes a moment until task 60098913-39a1-49e5-b8f2-8e64204cb22f (sshconfig) has been started and output is visible here. 2026-04-09 00:22:11.502931 | orchestrator | 2026-04-09 00:22:11.503075 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-09 00:22:11.503094 | orchestrator | 2026-04-09 00:22:11.503106 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-09 00:22:11.503117 | orchestrator | Thursday 09 April 2026 00:22:03 +0000 (0:00:00.188) 0:00:00.188 ******** 2026-04-09 00:22:11.503128 | orchestrator | ok: [testbed-manager] 2026-04-09 00:22:11.503140 | orchestrator | 2026-04-09 00:22:11.503151 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-09 00:22:11.503162 | orchestrator | Thursday 09 April 2026 00:22:04 +0000 (0:00:00.879) 0:00:01.068 ******** 2026-04-09 00:22:11.503199 | orchestrator | changed: [testbed-manager] 2026-04-09 00:22:11.503211 | orchestrator | 2026-04-09 00:22:11.503222 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-09 00:22:11.503233 | orchestrator | Thursday 09 April 2026 00:22:05 +0000 (0:00:00.533) 0:00:01.602 ******** 2026-04-09 00:22:11.503244 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:22:11.503255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-09 00:22:11.503266 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:22:11.503276 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:22:11.503287 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:22:11.503298 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:22:11.503308 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-09 00:22:11.503319 | orchestrator | 2026-04-09 00:22:11.503330 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-09 00:22:11.503340 | orchestrator | Thursday 09 April 2026 00:22:10 +0000 (0:00:05.547) 0:00:07.149 ******** 2026-04-09 00:22:11.503351 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:11.503362 | orchestrator | 2026-04-09 00:22:11.503373 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-09 00:22:11.503383 | orchestrator | Thursday 09 April 2026 00:22:10 +0000 (0:00:00.112) 0:00:07.262 ******** 2026-04-09 00:22:11.503394 | orchestrator | changed: [testbed-manager] 2026-04-09 00:22:11.503405 | orchestrator | 2026-04-09 00:22:11.503416 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:22:11.503428 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:22:11.503439 | orchestrator | 2026-04-09 00:22:11.503450 | orchestrator | 2026-04-09 00:22:11.503461 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:22:11.503472 | orchestrator | Thursday 09 April 2026 00:22:11 +0000 (0:00:00.533) 0:00:07.795 ******** 2026-04-09 00:22:11.503485 | orchestrator | =============================================================================== 2026-04-09 00:22:11.503498 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.55s 2026-04-09 00:22:11.503511 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.88s 2026-04-09 00:22:11.503523 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2026-04-09 00:22:11.503536 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.53s 2026-04-09 00:22:11.503549 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-04-09 00:22:11.692357 | orchestrator | + osism apply known-hosts 2026-04-09 00:22:23.052274 | orchestrator | 2026-04-09 00:22:23 | INFO  | Prepare task for execution of known-hosts. 2026-04-09 00:22:23.134461 | orchestrator | 2026-04-09 00:22:23 | INFO  | Task c6ff692f-632b-41d2-a705-e0a538e077af (known-hosts) was prepared for execution. 2026-04-09 00:22:23.134561 | orchestrator | 2026-04-09 00:22:23 | INFO  | It takes a moment until task c6ff692f-632b-41d2-a705-e0a538e077af (known-hosts) has been started and output is visible here. 2026-04-09 00:22:38.409900 | orchestrator | 2026-04-09 00:22:38.410110 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-09 00:22:38.410145 | orchestrator | 2026-04-09 00:22:38.410166 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-09 00:22:38.410185 | orchestrator | Thursday 09 April 2026 00:22:26 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-04-09 00:22:38.410198 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:22:38.410210 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 00:22:38.410221 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:22:38.410260 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:22:38.410271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:22:38.410282 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:22:38.410292 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 00:22:38.410303 | orchestrator | 2026-04-09 00:22:38.410315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-09 
00:22:38.410327 | orchestrator | Thursday 09 April 2026 00:22:32 +0000 (0:00:06.358) 0:00:06.546 ******** 2026-04-09 00:22:38.410351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-09 00:22:38.410365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 00:22:38.410377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 00:22:38.410388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 00:22:38.410399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 00:22:38.410413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 00:22:38.410426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 00:22:38.410439 | orchestrator | 2026-04-09 00:22:38.410451 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:38.410465 | orchestrator | Thursday 09 April 2026 00:22:32 +0000 (0:00:00.162) 0:00:06.708 ******** 2026-04-09 00:22:38.410478 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE6nM/j4yGPFZcCH9sNyDmCgK4HIqa8yOWxUs/zlXkIhkQqyEIA0KLdDfYyRWxyHmtPl4guRrWrJn1XZEz5w+Hs=) 2026-04-09 00:22:38.410497 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClWmFH1ncxFIjxUYZWDoaNAHXZxuknRGcauiETU+WKo/EVykXCorwLEiE/wjEnIKJpSgiHAVI6wjXCC1zAQ8TP7lkkOZlFDbdgzzpAOhEIe5CP2Xqwq9diYR8xYuHf08NF0DEWk41MhuRKsEttM4FMaoqhkyBsFnkIYDpEGYm+1+TX4VOL5U4Bt6wYdOss098x1cAR3fnJ0myv1M4cDUMzvgnolfrJdDaiLFexcwz0MRhXHy9dY4Q68tnxU1ZN2ZIRd9go2QjpjUoyDsuciMg5FFFGBIsV5alWYHdXxPSXOFaeZe+B6A1Z5iP0W7no9CYIOMOarqr7gPTR8apsMGFAyPbvYQZoABLJw6Z1DLcxd8tPRYfUNR6TnH2qIGqG6bnXumX/SJscWb613NKxuV8dYoatbICpS64Jkp8gN3MhwramzqBA370menT+BQA24JmAHLPOHMD54Wb/pUbb9MjZmf+SWkzwPtvtx2z9qyVDRX820ORqQyvPGRreqqbUh1M=) 2026-04-09 00:22:38.410514 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ0xiSUxx+Htc2Zhy0eNt58OZc562yrCFpBbCf5cK5Lb) 2026-04-09 00:22:38.410529 | orchestrator | 2026-04-09 00:22:38.410542 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:38.410555 | orchestrator | Thursday 09 April 2026 00:22:33 +0000 (0:00:01.205) 0:00:07.914 ******** 2026-04-09 00:22:38.410596 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNqSvIk1+taJAwyorsVsgQTdISOlV2iX7Og3y0dA3e9KxqExh4LWA17/ZIyv5ssuzbO8+KuAawahzSAzLTCAyQDb/bG+jcIzWP2YTYp/ikRmtM0UIeFBtLNOmCjC8AsGLU3EWmri97dpBcYHj36hmlhQfYRLkb881+gRg5bR8mNfCWCJcU4gkTL2lt4bkKX/4GNtjZ0pxpkSHZ65gcMPzAPtuajoqJ4yGPmp5NxLW7nODa7BH6UazF54k/XMBb/qtQs7V6BgKBppZ/08CpwczaOlDAWO2fGjE7IbzQWykVsDdxUAC5ZhueZEKCqdSNHcmjWOXd3SgtMn8TpWWkMWIelJV+ndF5Va/OU3gCCYYRlUnwzTrdM+owmcBo0Ct4W2FN9YtsdZ8Ac+tmLsZdfbSS0/tAIfySsYFHKIlYdbqQStEiBkqpQJwc4ot/U2wfWRrC56tHgIRQ0nl2X95z3jiH/buA0Hw/+S+Ln41J8+viqI4tzTuSV0LOxFtgIPKmR+0=) 
2026-04-09 00:22:38.410620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJ9tY80u2PeW9FaZKV5xcaDCZqQp9qyYJU7BHHUJI7I8lvbnLYuqXf6kdPOZ6laQBhcWn8htrcna31TyJtntKM=) 2026-04-09 00:22:38.410634 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAnpey1zmHWzHvpFIJ48G74lgZFI2D9czuzD4IRCrFky) 2026-04-09 00:22:38.410647 | orchestrator | 2026-04-09 00:22:38.410660 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:38.410673 | orchestrator | Thursday 09 April 2026 00:22:34 +0000 (0:00:00.962) 0:00:08.877 ******** 2026-04-09 00:22:38.410686 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXBosj6hIK6vae2WbQzDWafQT1NEb1JMOFzMtaD51wEeAUAQuTl2GJ3uco6LXQ746QJgmEho2GfAh2Q2VCNaTZFdAQDhf8oSu7/w3ZJ+Q7KxvGEHj3nv9FqjdNVyNfWrqyeSrXHVtRfgInK8P8vu60g/AaX5IKEQ4/4Fkffg/1Q0Pso37EysJDu7Rk0Y7Oj6iQVFvK1d1y0ZrQDItSSGafH1Zl0XNf5kvq5dnp43pn4ghd7eCgYQPjA6m+ciIllkSEkBUruhAtwuhZOENNfZf49jK9upLo/bpUmeOugxwbDSquyULxO5JV6y7ogAWw42br9pi0CzZrKUXzQyNPydxjHdRC8sYjntLIbyD9cdBcDFDzo3idUmFC/K2g3eX70nx3rQxkuVywGgLxaSoxFg7zcyT5V8sIGihHmkRjBf5vncqt+ybaZwco4ML6CszpnSaq7mmOmC/7pWGBRT1yOBqn8p+sd6FKNZjK+YBX6GQ2kIPh4NDfhhigssxRFxSqBck=) 2026-04-09 00:22:38.410699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMbgwjA29Qeho6bjwU5Pd0ztuE4F0uqJA2SBDYGuTLwoF2wGO79OQew2q9z2sWkWcl5ppWyCcSEIz1W4qpB6F8=) 2026-04-09 00:22:38.410783 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILfzIy+XPB25P0xs/rYN61TkKq2w/f/wYW6LA2UuIe1e) 2026-04-09 00:22:38.410797 | orchestrator | 2026-04-09 00:22:38.410809 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:38.410820 
| orchestrator | Thursday 09 April 2026 00:22:35 +0000 (0:00:01.040) 0:00:09.917 ******** 2026-04-09 00:22:38.410831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQ/4c8x/XRIS5LPbCtbHv0bu+jTRZiMZ1mRaw7QZNwO) 2026-04-09 00:22:38.410842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjPeLjkfYHyFe8B3KZdeN4G/Qw1vM7noI+kQUHaQSkmFzSF0RlevEZQ+IONb92a3iZ8UXQGpxUAn2ilw5En9IxR0GVjSH2y56hpu/PhuetfDhiy2neA3GegSrBG0MFDkH6+LYs+dUv0wqT4LUnmYPAaBdb1aG9j9XEC2PmHDTGoypO0+wBwoSVK43n/ArtEKRe0bQBSy76Si3epParfbGA8z1jtkHWYak/5zREtwz56YBdGKZ7+heGU1ZfyzcIInzr/fmyhbMR6IYJ/ngqjCHSDjcS+0mvFMV2ETzdSKRsxtmD7cFiAxSFq5SfMj63otMOt0sSs+agrLWLuM8kvI3zd2gyrI2sQzdsCuzrNrqfCgbodVRG+QDFF9FcemaPRTisJZs+cUEn0/xgvJYmOCOe3ppdI3oDvYX3FNQnwd0tyIK3uSPncTrASxUBE00VUl4c8eFWG1cA8nVdCXSE17EWQFK7nJTGMkSUOXpBSZ7UiEuLsS3b9fd/ts8+niSuIxE=) 2026-04-09 00:22:38.410854 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGztr6nGrYS60av/3GDkP2MIO6T89W4aASmtwCjibS55sgS4gJIfs8XLXNj4+oi/CTlZAGQdvQHL7JC5/spQRGE=) 2026-04-09 00:22:38.410865 | orchestrator | 2026-04-09 00:22:38.410875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:38.410886 | orchestrator | Thursday 09 April 2026 00:22:36 +0000 (0:00:01.004) 0:00:10.921 ******** 2026-04-09 00:22:38.410897 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFmwk7dVyHzok4Atskk4bg9aSihnthNWVCMeaHyxdcokebjGHsJ2nWZIh3RilZh/REv56X6ZSMz4weq9NdNZLQk=) 2026-04-09 00:22:38.410909 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC29gw3+xBMtBIMT071a377k8eDPB/wuFxg2EzdmlA2tmJvThXhOD2fKxUOUdADrvWk4fVWVqQAVc9+c0qGi6nq+1L1L/JFfhoSEYGvvSE0oq4TcQPPvSWcwkWkh78PCIleLK8Z8vVcRhVnjq3DPDf1c6vzOo2MsFUAcLj5By1s2YBSrSGIDo35sJbhvpJd6tKQmEoAEfk6wF4R9MfOT528TwGIs+M1jW1kZS+7l8+x4NbkTO8Ow/F7Bv70i8Ess6gwB0PutHPhCN7oVBmmgjHFQyLIJPOcJDUUgPPOuTx3WSyTQT5h4zTlSc7RM2TIM3D215bmFOOxujQ37jsJWxplaqTX6LtvwQJ1HQhw9/vTQe95O0GmffSWiVO9FwBGlusrfjOW16VCQ01Fdi4709WGsLB9r6OLsMUsusqyrm+SSTJiQUA0AbI4/PYMW1LIKpdiRfHt+b0urOW0zZHtezEDg+8M3nYKBrNL2ZRp02zuD8O9DaCk6md2goG9W8nUuKc=) 2026-04-09 00:22:38.410928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJuoWbEqevmDA0osKMw58airsKeUt+o2uyO+rYIVDjen) 2026-04-09 00:22:38.410939 | orchestrator | 2026-04-09 00:22:38.410950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:38.410960 | orchestrator | Thursday 09 April 2026 00:22:38 +0000 (0:00:01.050) 0:00:11.972 ******** 2026-04-09 00:22:38.410982 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBaMoVFMeE7Qui1q/sn9kJrhe163FUNW+4xoShHmFWDfdy3mTMtPny3XuHTez6OKTraZ285rPqIHyBWtXuQnQKDQTqDUWKsy5pfBUxMZY0dU+wkaGu8OMtTxck/gmA09VJ0o6A8zA4kPHpPWtKIFUMtZidjOKrgBTWKoweK2GumySmxZCeeHBjypj7oEJYn3Hwp2drmkbNrp4O48DK0+0QTuhuTyGcumKSoMNdLgBXfwswDd6Ty7NVHj2uD97AR+zk/zCvmno5xiYB6V+vcIRSicpU9GcGsYfPkfNQb1PbZBY6u3iLPH6Ymli18b90SgmllGBzQl2GdgvvwNjQqadAFZvhc7rbYfrbC7xx4a5p7qd03MvZahGtliC8J2lFXenLw/942PbAZZXwH6alM1kebXZdoD52k/8Nuu+UHds9v2hIWkqtWnXom8Ksmu5Apdzs/8pckJznvLV1Kj8AICx37xAHRR/WEYv55cYNAaRiAOHUvesaOmGoHwzDmoptYVM=) 2026-04-09 00:22:49.524363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFR3DsdqTa+JxbybJm/hF31GEhL5ysb1KmcbUta1VmoI) 2026-04-09 00:22:49.524471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHmHcMIWyOqudBT1QzQQPMeSuzsM+OY0c5Q/3YV52TXsu8g+aVE9nc9nwDVHJcxas4+RVv8FRv1zY3srA0QIIWI=) 2026-04-09 00:22:49.524488 | orchestrator | 2026-04-09 00:22:49.524501 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:49.524513 | orchestrator | Thursday 09 April 2026 00:22:39 +0000 (0:00:01.038) 0:00:13.011 ******** 2026-04-09 00:22:49.524524 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINh96PA7JgZS+RtE1cRTb9ripTFMDoHmEpcrn0eAKmq/) 2026-04-09 00:22:49.524538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDyVtl7V9Vml0g+TYfq9JHD2obKhG87VRdesjW9xEtNxEz/GMhwBYO3iFUdUkHEUSpXvCyY9aTUCAC/bBcl5kcgwVR6NXOCcfAl7vJdklmaJ+9E+12rNUGn9MeFuRJmtvu2QvZcscyM3T+/HPSWEbvzz8jln7+ht8LW4ggDxy2455R7Kd/kK0bp9SelqDMft63V5shlAz73MxGw/NM2YEHJ9YY56kH3DcVay8HOJSIRIk1XmYdIl+mPAtSZCYAIw8Glej4ObfDnyqBDc2DeAE06vaxWsaELHMuuQBxhnoxtBA2nE48k8sgjltIHpjDRDsW9PMeYwJDZEoUl6Y7Qcs0mnO8RewATfEm56A3BaD2LOl9PCMxe0cIUzreoyHWCUYes7FabMnR6PT3WDDfjD9YtQ6M3ZXAoFCnWIJWOALrgjiQOW9ezzP26PXcJTRttdV+vS2t445xhU2qPh7sM+ExW8jGxGDtm0zvtrNUcDjjqXkCMhWdxcJB2nKf9D4zTkAE=) 2026-04-09 00:22:49.524552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTwI79CXQMKmgh8gr69/KPbd9IdRRlKCcs+rUTRVK9Asc+dXvuz7gd7ns+no64GrqedSsu6vi+HHGSXkBC5pvM=) 2026-04-09 00:22:49.524563 | orchestrator | 2026-04-09 00:22:49.524574 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-09 00:22:49.524585 | orchestrator | Thursday 09 April 2026 00:22:40 +0000 (0:00:01.026) 0:00:14.037 ******** 2026-04-09 00:22:49.524597 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:22:49.524609 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 
00:22:49.524620 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:22:49.524631 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:22:49.524641 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:22:49.524672 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:22:49.524684 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 00:22:49.524721 | orchestrator | 2026-04-09 00:22:49.524733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-09 00:22:49.524745 | orchestrator | Thursday 09 April 2026 00:22:45 +0000 (0:00:05.214) 0:00:19.252 ******** 2026-04-09 00:22:49.524756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-09 00:22:49.524770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 00:22:49.524780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 00:22:49.524791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 00:22:49.524802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 00:22:49.524813 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 00:22:49.524823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 00:22:49.524834 | orchestrator | 2026-04-09 00:22:49.524845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:49.524856 | orchestrator | Thursday 09 April 2026 00:22:45 +0000 (0:00:00.164) 0:00:19.416 ******** 2026-04-09 00:22:49.524866 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ0xiSUxx+Htc2Zhy0eNt58OZc562yrCFpBbCf5cK5Lb) 2026-04-09 00:22:49.524904 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClWmFH1ncxFIjxUYZWDoaNAHXZxuknRGcauiETU+WKo/EVykXCorwLEiE/wjEnIKJpSgiHAVI6wjXCC1zAQ8TP7lkkOZlFDbdgzzpAOhEIe5CP2Xqwq9diYR8xYuHf08NF0DEWk41MhuRKsEttM4FMaoqhkyBsFnkIYDpEGYm+1+TX4VOL5U4Bt6wYdOss098x1cAR3fnJ0myv1M4cDUMzvgnolfrJdDaiLFexcwz0MRhXHy9dY4Q68tnxU1ZN2ZIRd9go2QjpjUoyDsuciMg5FFFGBIsV5alWYHdXxPSXOFaeZe+B6A1Z5iP0W7no9CYIOMOarqr7gPTR8apsMGFAyPbvYQZoABLJw6Z1DLcxd8tPRYfUNR6TnH2qIGqG6bnXumX/SJscWb613NKxuV8dYoatbICpS64Jkp8gN3MhwramzqBA370menT+BQA24JmAHLPOHMD54Wb/pUbb9MjZmf+SWkzwPtvtx2z9qyVDRX820ORqQyvPGRreqqbUh1M=) 2026-04-09 00:22:49.524917 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE6nM/j4yGPFZcCH9sNyDmCgK4HIqa8yOWxUs/zlXkIhkQqyEIA0KLdDfYyRWxyHmtPl4guRrWrJn1XZEz5w+Hs=) 2026-04-09 00:22:49.524928 | orchestrator | 2026-04-09 00:22:49.524939 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:49.524950 | orchestrator | Thursday 09 April 2026 
00:22:46 +0000 (0:00:01.049) 0:00:20.466 ******** 2026-04-09 00:22:49.524961 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIJ9tY80u2PeW9FaZKV5xcaDCZqQp9qyYJU7BHHUJI7I8lvbnLYuqXf6kdPOZ6laQBhcWn8htrcna31TyJtntKM=) 2026-04-09 00:22:49.524973 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNqSvIk1+taJAwyorsVsgQTdISOlV2iX7Og3y0dA3e9KxqExh4LWA17/ZIyv5ssuzbO8+KuAawahzSAzLTCAyQDb/bG+jcIzWP2YTYp/ikRmtM0UIeFBtLNOmCjC8AsGLU3EWmri97dpBcYHj36hmlhQfYRLkb881+gRg5bR8mNfCWCJcU4gkTL2lt4bkKX/4GNtjZ0pxpkSHZ65gcMPzAPtuajoqJ4yGPmp5NxLW7nODa7BH6UazF54k/XMBb/qtQs7V6BgKBppZ/08CpwczaOlDAWO2fGjE7IbzQWykVsDdxUAC5ZhueZEKCqdSNHcmjWOXd3SgtMn8TpWWkMWIelJV+ndF5Va/OU3gCCYYRlUnwzTrdM+owmcBo0Ct4W2FN9YtsdZ8Ac+tmLsZdfbSS0/tAIfySsYFHKIlYdbqQStEiBkqpQJwc4ot/U2wfWRrC56tHgIRQ0nl2X95z3jiH/buA0Hw/+S+Ln41J8+viqI4tzTuSV0LOxFtgIPKmR+0=) 2026-04-09 00:22:49.524991 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAnpey1zmHWzHvpFIJ48G74lgZFI2D9czuzD4IRCrFky) 2026-04-09 00:22:49.525002 | orchestrator | 2026-04-09 00:22:49.525013 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:49.525024 | orchestrator | Thursday 09 April 2026 00:22:47 +0000 (0:00:01.031) 0:00:21.497 ******** 2026-04-09 00:22:49.525076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXBosj6hIK6vae2WbQzDWafQT1NEb1JMOFzMtaD51wEeAUAQuTl2GJ3uco6LXQ746QJgmEho2GfAh2Q2VCNaTZFdAQDhf8oSu7/w3ZJ+Q7KxvGEHj3nv9FqjdNVyNfWrqyeSrXHVtRfgInK8P8vu60g/AaX5IKEQ4/4Fkffg/1Q0Pso37EysJDu7Rk0Y7Oj6iQVFvK1d1y0ZrQDItSSGafH1Zl0XNf5kvq5dnp43pn4ghd7eCgYQPjA6m+ciIllkSEkBUruhAtwuhZOENNfZf49jK9upLo/bpUmeOugxwbDSquyULxO5JV6y7ogAWw42br9pi0CzZrKUXzQyNPydxjHdRC8sYjntLIbyD9cdBcDFDzo3idUmFC/K2g3eX70nx3rQxkuVywGgLxaSoxFg7zcyT5V8sIGihHmkRjBf5vncqt+ybaZwco4ML6CszpnSaq7mmOmC/7pWGBRT1yOBqn8p+sd6FKNZjK+YBX6GQ2kIPh4NDfhhigssxRFxSqBck=) 2026-04-09 00:22:49.525089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNMbgwjA29Qeho6bjwU5Pd0ztuE4F0uqJA2SBDYGuTLwoF2wGO79OQew2q9z2sWkWcl5ppWyCcSEIz1W4qpB6F8=) 2026-04-09 00:22:49.525100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILfzIy+XPB25P0xs/rYN61TkKq2w/f/wYW6LA2UuIe1e) 2026-04-09 00:22:49.525110 | orchestrator | 2026-04-09 00:22:49.525121 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:49.525132 | orchestrator | Thursday 09 April 2026 00:22:48 +0000 (0:00:01.028) 0:00:22.525 ******** 2026-04-09 00:22:49.525150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjPeLjkfYHyFe8B3KZdeN4G/Qw1vM7noI+kQUHaQSkmFzSF0RlevEZQ+IONb92a3iZ8UXQGpxUAn2ilw5En9IxR0GVjSH2y56hpu/PhuetfDhiy2neA3GegSrBG0MFDkH6+LYs+dUv0wqT4LUnmYPAaBdb1aG9j9XEC2PmHDTGoypO0+wBwoSVK43n/ArtEKRe0bQBSy76Si3epParfbGA8z1jtkHWYak/5zREtwz56YBdGKZ7+heGU1ZfyzcIInzr/fmyhbMR6IYJ/ngqjCHSDjcS+0mvFMV2ETzdSKRsxtmD7cFiAxSFq5SfMj63otMOt0sSs+agrLWLuM8kvI3zd2gyrI2sQzdsCuzrNrqfCgbodVRG+QDFF9FcemaPRTisJZs+cUEn0/xgvJYmOCOe3ppdI3oDvYX3FNQnwd0tyIK3uSPncTrASxUBE00VUl4c8eFWG1cA8nVdCXSE17EWQFK7nJTGMkSUOXpBSZ7UiEuLsS3b9fd/ts8+niSuIxE=) 2026-04-09 00:22:49.525161 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGztr6nGrYS60av/3GDkP2MIO6T89W4aASmtwCjibS55sgS4gJIfs8XLXNj4+oi/CTlZAGQdvQHL7JC5/spQRGE=) 2026-04-09 00:22:49.525183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQ/4c8x/XRIS5LPbCtbHv0bu+jTRZiMZ1mRaw7QZNwO) 2026-04-09 00:22:53.589111 | orchestrator | 2026-04-09 00:22:53.589215 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:53.589232 | orchestrator | Thursday 09 April 2026 00:22:49 +0000 (0:00:00.983) 0:00:23.509 ******** 2026-04-09 00:22:53.589265 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC29gw3+xBMtBIMT071a377k8eDPB/wuFxg2EzdmlA2tmJvThXhOD2fKxUOUdADrvWk4fVWVqQAVc9+c0qGi6nq+1L1L/JFfhoSEYGvvSE0oq4TcQPPvSWcwkWkh78PCIleLK8Z8vVcRhVnjq3DPDf1c6vzOo2MsFUAcLj5By1s2YBSrSGIDo35sJbhvpJd6tKQmEoAEfk6wF4R9MfOT528TwGIs+M1jW1kZS+7l8+x4NbkTO8Ow/F7Bv70i8Ess6gwB0PutHPhCN7oVBmmgjHFQyLIJPOcJDUUgPPOuTx3WSyTQT5h4zTlSc7RM2TIM3D215bmFOOxujQ37jsJWxplaqTX6LtvwQJ1HQhw9/vTQe95O0GmffSWiVO9FwBGlusrfjOW16VCQ01Fdi4709WGsLB9r6OLsMUsusqyrm+SSTJiQUA0AbI4/PYMW1LIKpdiRfHt+b0urOW0zZHtezEDg+8M3nYKBrNL2ZRp02zuD8O9DaCk6md2goG9W8nUuKc=) 2026-04-09 00:22:53.589282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFmwk7dVyHzok4Atskk4bg9aSihnthNWVCMeaHyxdcokebjGHsJ2nWZIh3RilZh/REv56X6ZSMz4weq9NdNZLQk=) 2026-04-09 00:22:53.589323 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJuoWbEqevmDA0osKMw58airsKeUt+o2uyO+rYIVDjen) 2026-04-09 00:22:53.589336 | orchestrator | 2026-04-09 00:22:53.589347 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:53.589358 | orchestrator | Thursday 09 April 2026 00:22:50 +0000 (0:00:01.041) 
0:00:24.550 ******** 2026-04-09 00:22:53.589369 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBaMoVFMeE7Qui1q/sn9kJrhe163FUNW+4xoShHmFWDfdy3mTMtPny3XuHTez6OKTraZ285rPqIHyBWtXuQnQKDQTqDUWKsy5pfBUxMZY0dU+wkaGu8OMtTxck/gmA09VJ0o6A8zA4kPHpPWtKIFUMtZidjOKrgBTWKoweK2GumySmxZCeeHBjypj7oEJYn3Hwp2drmkbNrp4O48DK0+0QTuhuTyGcumKSoMNdLgBXfwswDd6Ty7NVHj2uD97AR+zk/zCvmno5xiYB6V+vcIRSicpU9GcGsYfPkfNQb1PbZBY6u3iLPH6Ymli18b90SgmllGBzQl2GdgvvwNjQqadAFZvhc7rbYfrbC7xx4a5p7qd03MvZahGtliC8J2lFXenLw/942PbAZZXwH6alM1kebXZdoD52k/8Nuu+UHds9v2hIWkqtWnXom8Ksmu5Apdzs/8pckJznvLV1Kj8AICx37xAHRR/WEYv55cYNAaRiAOHUvesaOmGoHwzDmoptYVM=) 2026-04-09 00:22:53.589381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHmHcMIWyOqudBT1QzQQPMeSuzsM+OY0c5Q/3YV52TXsu8g+aVE9nc9nwDVHJcxas4+RVv8FRv1zY3srA0QIIWI=) 2026-04-09 00:22:53.589392 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFR3DsdqTa+JxbybJm/hF31GEhL5ysb1KmcbUta1VmoI) 2026-04-09 00:22:53.589403 | orchestrator | 2026-04-09 00:22:53.589413 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:22:53.589424 | orchestrator | Thursday 09 April 2026 00:22:51 +0000 (0:00:00.992) 0:00:25.543 ******** 2026-04-09 00:22:53.589435 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDyVtl7V9Vml0g+TYfq9JHD2obKhG87VRdesjW9xEtNxEz/GMhwBYO3iFUdUkHEUSpXvCyY9aTUCAC/bBcl5kcgwVR6NXOCcfAl7vJdklmaJ+9E+12rNUGn9MeFuRJmtvu2QvZcscyM3T+/HPSWEbvzz8jln7+ht8LW4ggDxy2455R7Kd/kK0bp9SelqDMft63V5shlAz73MxGw/NM2YEHJ9YY56kH3DcVay8HOJSIRIk1XmYdIl+mPAtSZCYAIw8Glej4ObfDnyqBDc2DeAE06vaxWsaELHMuuQBxhnoxtBA2nE48k8sgjltIHpjDRDsW9PMeYwJDZEoUl6Y7Qcs0mnO8RewATfEm56A3BaD2LOl9PCMxe0cIUzreoyHWCUYes7FabMnR6PT3WDDfjD9YtQ6M3ZXAoFCnWIJWOALrgjiQOW9ezzP26PXcJTRttdV+vS2t445xhU2qPh7sM+ExW8jGxGDtm0zvtrNUcDjjqXkCMhWdxcJB2nKf9D4zTkAE=) 2026-04-09 00:22:53.589446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTwI79CXQMKmgh8gr69/KPbd9IdRRlKCcs+rUTRVK9Asc+dXvuz7gd7ns+no64GrqedSsu6vi+HHGSXkBC5pvM=) 2026-04-09 00:22:53.589457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINh96PA7JgZS+RtE1cRTb9ripTFMDoHmEpcrn0eAKmq/) 2026-04-09 00:22:53.589468 | orchestrator | 2026-04-09 00:22:53.589479 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-09 00:22:53.589489 | orchestrator | Thursday 09 April 2026 00:22:52 +0000 (0:00:01.043) 0:00:26.586 ******** 2026-04-09 00:22:53.589501 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-09 00:22:53.589512 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 00:22:53.589522 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-09 00:22:53.589533 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-09 00:22:53.589543 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-09 00:22:53.589554 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-09 00:22:53.589564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-09 00:22:53.589575 | orchestrator | 
skipping: [testbed-manager] 2026-04-09 00:22:53.589586 | orchestrator | 2026-04-09 00:22:53.589615 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-09 00:22:53.589628 | orchestrator | Thursday 09 April 2026 00:22:52 +0000 (0:00:00.183) 0:00:26.770 ******** 2026-04-09 00:22:53.589648 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:53.589661 | orchestrator | 2026-04-09 00:22:53.589674 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-09 00:22:53.589686 | orchestrator | Thursday 09 April 2026 00:22:52 +0000 (0:00:00.051) 0:00:26.822 ******** 2026-04-09 00:22:53.589698 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:22:53.589711 | orchestrator | 2026-04-09 00:22:53.589723 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-09 00:22:53.589735 | orchestrator | Thursday 09 April 2026 00:22:52 +0000 (0:00:00.060) 0:00:26.883 ******** 2026-04-09 00:22:53.589748 | orchestrator | changed: [testbed-manager] 2026-04-09 00:22:53.589760 | orchestrator | 2026-04-09 00:22:53.589772 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:22:53.589785 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 00:22:53.589798 | orchestrator | 2026-04-09 00:22:53.589811 | orchestrator | 2026-04-09 00:22:53.589823 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:22:53.589836 | orchestrator | Thursday 09 April 2026 00:22:53 +0000 (0:00:00.475) 0:00:27.359 ******** 2026-04-09 00:22:53.589848 | orchestrator | =============================================================================== 2026-04-09 00:22:53.589860 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.36s 2026-04-09 
00:22:53.589872 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2026-04-09 00:22:53.589884 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-09 00:22:53.589896 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-09 00:22:53.589908 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-09 00:22:53.589920 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-09 00:22:53.589932 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-09 00:22:53.589944 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-09 00:22:53.589957 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-09 00:22:53.589968 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-09 00:22:53.589979 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-09 00:22:53.589997 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-09 00:22:53.590008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-09 00:22:53.590129 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-09 00:22:53.590143 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-04-09 00:22:53.590153 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-04-09 00:22:53.590164 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2026-04-09 
00:22:53.590174 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-09 00:22:53.590185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-04-09 00:22:53.590196 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-04-09 00:22:53.756885 | orchestrator | + osism apply squid 2026-04-09 00:23:05.148524 | orchestrator | 2026-04-09 00:23:05 | INFO  | Prepare task for execution of squid. 2026-04-09 00:23:05.220481 | orchestrator | 2026-04-09 00:23:05 | INFO  | Task c86f0c8e-748c-49cd-8c2e-719cebe62104 (squid) was prepared for execution. 2026-04-09 00:23:05.220576 | orchestrator | 2026-04-09 00:23:05 | INFO  | It takes a moment until task c86f0c8e-748c-49cd-8c2e-719cebe62104 (squid) has been started and output is visible here. 2026-04-09 00:24:57.735171 | orchestrator | 2026-04-09 00:24:57.735262 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-09 00:24:57.735276 | orchestrator | 2026-04-09 00:24:57.735285 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-09 00:24:57.735293 | orchestrator | Thursday 09 April 2026 00:23:08 +0000 (0:00:00.191) 0:00:00.191 ******** 2026-04-09 00:24:57.735301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:24:57.735309 | orchestrator | 2026-04-09 00:24:57.735316 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-09 00:24:57.735324 | orchestrator | Thursday 09 April 2026 00:23:08 +0000 (0:00:00.078) 0:00:00.270 ******** 2026-04-09 00:24:57.735331 | orchestrator | ok: [testbed-manager] 2026-04-09 00:24:57.735339 | orchestrator | 2026-04-09 00:24:57.735346 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-09 00:24:57.735353 | orchestrator | Thursday 09 April 2026 00:23:10 +0000 (0:00:02.259) 0:00:02.529 ******** 2026-04-09 00:24:57.735361 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-09 00:24:57.735368 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-09 00:24:57.735375 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-09 00:24:57.735382 | orchestrator | 2026-04-09 00:24:57.735390 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-09 00:24:57.735397 | orchestrator | Thursday 09 April 2026 00:23:11 +0000 (0:00:01.221) 0:00:03.751 ******** 2026-04-09 00:24:57.735404 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-09 00:24:57.735411 | orchestrator | 2026-04-09 00:24:57.735418 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-09 00:24:57.735425 | orchestrator | Thursday 09 April 2026 00:23:12 +0000 (0:00:01.016) 0:00:04.767 ******** 2026-04-09 00:24:57.735432 | orchestrator | ok: [testbed-manager] 2026-04-09 00:24:57.735440 | orchestrator | 2026-04-09 00:24:57.735447 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-09 00:24:57.735470 | orchestrator | Thursday 09 April 2026 00:23:13 +0000 (0:00:00.346) 0:00:05.114 ******** 2026-04-09 00:24:57.735478 | orchestrator | changed: [testbed-manager] 2026-04-09 00:24:57.735485 | orchestrator | 2026-04-09 00:24:57.735492 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-09 00:24:57.735500 | orchestrator | Thursday 09 April 2026 00:23:14 +0000 (0:00:00.883) 0:00:05.998 ******** 2026-04-09 00:24:57.735507 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-04-09 00:24:57.735515 | orchestrator | ok: [testbed-manager] 2026-04-09 00:24:57.735522 | orchestrator | 2026-04-09 00:24:57.735529 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-09 00:24:57.735536 | orchestrator | Thursday 09 April 2026 00:23:44 +0000 (0:00:30.759) 0:00:36.758 ******** 2026-04-09 00:24:57.735544 | orchestrator | changed: [testbed-manager] 2026-04-09 00:24:57.735551 | orchestrator | 2026-04-09 00:24:57.735558 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-09 00:24:57.735565 | orchestrator | Thursday 09 April 2026 00:23:56 +0000 (0:00:12.032) 0:00:48.791 ******** 2026-04-09 00:24:57.735572 | orchestrator | Pausing for 60 seconds 2026-04-09 00:24:57.735580 | orchestrator | changed: [testbed-manager] 2026-04-09 00:24:57.735587 | orchestrator | 2026-04-09 00:24:57.735594 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-09 00:24:57.735601 | orchestrator | Thursday 09 April 2026 00:24:57 +0000 (0:01:00.077) 0:01:48.868 ******** 2026-04-09 00:24:57.735608 | orchestrator | ok: [testbed-manager] 2026-04-09 00:24:57.735615 | orchestrator | 2026-04-09 00:24:57.735622 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-09 00:24:57.735651 | orchestrator | Thursday 09 April 2026 00:24:57 +0000 (0:00:00.055) 0:01:48.924 ******** 2026-04-09 00:24:57.735659 | orchestrator | changed: [testbed-manager] 2026-04-09 00:24:57.735667 | orchestrator | 2026-04-09 00:24:57.735674 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:24:57.735681 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:24:57.735688 | orchestrator | 2026-04-09 00:24:57.735695 | orchestrator | 2026-04-09 00:24:57.735702 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:24:57.735709 | orchestrator | Thursday 09 April 2026 00:24:57 +0000 (0:00:00.511) 0:01:49.436 ******** 2026-04-09 00:24:57.735716 | orchestrator | =============================================================================== 2026-04-09 00:24:57.735723 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-04-09 00:24:57.735732 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.76s 2026-04-09 00:24:57.735740 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.03s 2026-04-09 00:24:57.735748 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.26s 2026-04-09 00:24:57.735756 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-04-09 00:24:57.735765 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.02s 2026-04-09 00:24:57.735773 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-04-09 00:24:57.735781 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.51s 2026-04-09 00:24:57.735790 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-04-09 00:24:57.735798 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-09 00:24:57.735807 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-09 00:24:57.853262 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 00:24:57.853378 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-09 00:24:57.856777 | orchestrator | + set -e 2026-04-09 00:24:57.856823 | orchestrator | + NAMESPACE=kolla 
2026-04-09 00:24:57.856845 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-09 00:24:57.861197 | orchestrator | ++ semver latest 9.0.0 2026-04-09 00:24:57.899939 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-09 00:24:57.900040 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 00:24:57.900271 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-09 00:25:09.022477 | orchestrator | 2026-04-09 00:25:09 | INFO  | Prepare task for execution of operator. 2026-04-09 00:25:09.102588 | orchestrator | 2026-04-09 00:25:09 | INFO  | Task 9bc32d4a-8e4c-4fa4-b728-96f96fe52b3c (operator) was prepared for execution. 2026-04-09 00:25:09.102668 | orchestrator | 2026-04-09 00:25:09 | INFO  | It takes a moment until task 9bc32d4a-8e4c-4fa4-b728-96f96fe52b3c (operator) has been started and output is visible here. 2026-04-09 00:25:24.380657 | orchestrator | 2026-04-09 00:25:24.380765 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-09 00:25:24.380782 | orchestrator | 2026-04-09 00:25:24.380795 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:25:24.380807 | orchestrator | Thursday 09 April 2026 00:25:12 +0000 (0:00:00.182) 0:00:00.182 ******** 2026-04-09 00:25:24.380819 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:24.380831 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:24.380842 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:24.380853 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:24.380864 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:24.380875 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:24.380889 | orchestrator | 2026-04-09 00:25:24.380901 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-09 00:25:24.380935 | orchestrator | Thursday 09 April 
2026 00:25:15 +0000 (0:00:03.459) 0:00:03.641 ******** 2026-04-09 00:25:24.380946 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:24.380957 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:24.380968 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:24.380978 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:24.381040 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:24.381053 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:24.381075 | orchestrator | 2026-04-09 00:25:24.381087 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-09 00:25:24.381098 | orchestrator | 2026-04-09 00:25:24.381108 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 00:25:24.381120 | orchestrator | Thursday 09 April 2026 00:25:16 +0000 (0:00:00.799) 0:00:04.441 ******** 2026-04-09 00:25:24.381130 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:24.381141 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:24.381152 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:24.381162 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:24.381173 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:24.381186 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:24.381200 | orchestrator | 2026-04-09 00:25:24.381212 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 00:25:24.381247 | orchestrator | Thursday 09 April 2026 00:25:16 +0000 (0:00:00.169) 0:00:04.610 ******** 2026-04-09 00:25:24.381258 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:25:24.381269 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:25:24.381280 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:25:24.381290 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:25:24.381301 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:25:24.381312 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:25:24.381323 | 
orchestrator | 2026-04-09 00:25:24.381333 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 00:25:24.381345 | orchestrator | Thursday 09 April 2026 00:25:16 +0000 (0:00:00.160) 0:00:04.771 ******** 2026-04-09 00:25:24.381356 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:24.381367 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:24.381378 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:24.381389 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:24.381399 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:24.381410 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:24.381421 | orchestrator | 2026-04-09 00:25:24.381432 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 00:25:24.381443 | orchestrator | Thursday 09 April 2026 00:25:17 +0000 (0:00:00.659) 0:00:05.431 ******** 2026-04-09 00:25:24.381454 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:25:24.381464 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:25:24.381475 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:25:24.381486 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:25:24.381496 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:25:24.381507 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:25:24.381518 | orchestrator | 2026-04-09 00:25:24.381529 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 00:25:24.381540 | orchestrator | Thursday 09 April 2026 00:25:18 +0000 (0:00:00.890) 0:00:06.321 ******** 2026-04-09 00:25:24.381551 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-09 00:25:24.381562 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-09 00:25:24.381573 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-09 00:25:24.381584 | orchestrator | changed: [testbed-node-0] => 
(item=adm)
2026-04-09 00:25:24.381594 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-09 00:25:24.381605 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-09 00:25:24.381616 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-09 00:25:24.381627 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-09 00:25:24.381638 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-09 00:25:24.381657 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-09 00:25:24.381668 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-09 00:25:24.381678 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-09 00:25:24.381689 | orchestrator |
2026-04-09 00:25:24.381700 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-09 00:25:24.381711 | orchestrator | Thursday 09 April 2026 00:25:19 +0000 (0:00:01.192) 0:00:07.513 ********
2026-04-09 00:25:24.381722 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:24.381733 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:24.381744 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:24.381754 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:24.381765 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:24.381776 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:24.381786 | orchestrator |
2026-04-09 00:25:24.381797 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-09 00:25:24.381809 | orchestrator | Thursday 09 April 2026 00:25:20 +0000 (0:00:01.351) 0:00:08.864 ********
2026-04-09 00:25:24.381820 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-09 00:25:24.381831 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-09 00:25:24.381841 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-09 00:25:24.381852 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-09 00:25:24.381863 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-09 00:25:24.381893 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-09 00:25:24.381905 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-09 00:25:24.381916 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-09 00:25:24.381926 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-09 00:25:24.381937 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-09 00:25:24.381947 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-09 00:25:24.381958 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-09 00:25:24.381969 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-09 00:25:24.381979 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-09 00:25:24.382088 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-09 00:25:24.382102 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-09 00:25:24.382120 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-09 00:25:24.382131 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-09 00:25:24.382142 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-09 00:25:24.382153 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-09 00:25:24.382164 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-09 00:25:24.382175 | orchestrator |
2026-04-09 00:25:24.382186 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-09 00:25:24.382198 | orchestrator | Thursday 09 April 2026 00:25:22 +0000 (0:00:01.462) 0:00:10.327 ********
2026-04-09 00:25:24.382209 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:24.382220 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:24.382231 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:24.382242 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:24.382252 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:24.382263 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:24.382274 | orchestrator |
2026-04-09 00:25:24.382285 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-09 00:25:24.382304 | orchestrator | Thursday 09 April 2026 00:25:22 +0000 (0:00:00.137) 0:00:10.465 ********
2026-04-09 00:25:24.382315 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:24.382326 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:24.382336 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:24.382347 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:24.382358 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:24.382369 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:24.382379 | orchestrator |
2026-04-09 00:25:24.382390 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-09 00:25:24.382401 | orchestrator | Thursday 09 April 2026 00:25:22 +0000 (0:00:00.165) 0:00:10.631 ********
2026-04-09 00:25:24.382412 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:24.382423 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:24.382434 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:24.382444 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:24.382455 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:24.382466 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:24.382476 | orchestrator |
2026-04-09 00:25:24.382487 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-09 00:25:24.382498 | orchestrator | Thursday 09 April 2026 00:25:23 +0000 (0:00:00.512) 0:00:11.143 ********
2026-04-09 00:25:24.382509 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:24.382520 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:24.382531 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:24.382541 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:24.382552 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:24.382563 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:24.382573 | orchestrator |
2026-04-09 00:25:24.382584 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-09 00:25:24.382595 | orchestrator | Thursday 09 April 2026 00:25:23 +0000 (0:00:00.154) 0:00:11.297 ********
2026-04-09 00:25:24.382607 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 00:25:24.382617 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 00:25:24.382628 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-09 00:25:24.382639 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:24.382650 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:24.382661 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 00:25:24.382672 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:24.382683 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-09 00:25:24.382694 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:24.382704 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:24.382715 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 00:25:24.382726 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:24.382736 | orchestrator |
2026-04-09 00:25:24.382747 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-09 00:25:24.382758 | orchestrator | Thursday 09 April 2026 00:25:24 +0000 (0:00:00.725) 0:00:12.023 ********
2026-04-09 00:25:24.382769 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:24.382780 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:24.382791 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:24.382801 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:24.382812 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:24.382823 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:24.382834 | orchestrator |
2026-04-09 00:25:24.382845 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-09 00:25:24.382855 | orchestrator | Thursday 09 April 2026 00:25:24 +0000 (0:00:00.137) 0:00:12.160 ********
2026-04-09 00:25:24.382866 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:24.382877 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:24.382888 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:24.382899 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:24.382924 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:25.679333 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:25.679437 | orchestrator |
2026-04-09 00:25:25.679454 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-09 00:25:25.679469 | orchestrator | Thursday 09 April 2026 00:25:24 +0000 (0:00:00.156) 0:00:12.317 ********
2026-04-09 00:25:25.679480 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:25.679491 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:25.679502 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:25.679513 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:25.679524 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:25.679534 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:25.679545 | orchestrator |
2026-04-09 00:25:25.679556 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-09 00:25:25.679567 | orchestrator | Thursday 09 April 2026 00:25:24 +0000 (0:00:00.135) 0:00:12.452 ********
2026-04-09 00:25:25.679578 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:25.679588 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:25.679599 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:25.679609 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:25.679620 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:25.679631 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:25.679641 | orchestrator |
2026-04-09 00:25:25.679652 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-09 00:25:25.679663 | orchestrator | Thursday 09 April 2026 00:25:25 +0000 (0:00:00.763) 0:00:13.216 ********
2026-04-09 00:25:25.679674 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:25.679685 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:25.679696 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:25.679706 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:25.679717 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:25.679727 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:25.679738 | orchestrator |
2026-04-09 00:25:25.679748 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:25:25.679783 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 00:25:25.679796 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 00:25:25.679807 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 00:25:25.679818 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 00:25:25.679829 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 00:25:25.679839 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 00:25:25.679850 | orchestrator |
2026-04-09 00:25:25.679861 | orchestrator |
2026-04-09 00:25:25.679875 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:25:25.679893 | orchestrator | Thursday 09 April 2026 00:25:25 +0000 (0:00:00.200) 0:00:13.417 ********
2026-04-09 00:25:25.679912 | orchestrator | ===============================================================================
2026-04-09 00:25:25.679930 | orchestrator | Gathering Facts --------------------------------------------------------- 3.46s
2026-04-09 00:25:25.679948 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.46s
2026-04-09 00:25:25.679968 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.35s
2026-04-09 00:25:25.680045 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2026-04-09 00:25:25.680062 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s
2026-04-09 00:25:25.680074 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2026-04-09 00:25:25.680087 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.76s
2026-04-09 00:25:25.680099 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-04-09 00:25:25.680111 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s
2026-04-09 00:25:25.680125 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.51s
2026-04-09 00:25:25.680138 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-04-09 00:25:25.680150 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-04-09 00:25:25.680163 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-04-09 00:25:25.680177 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-04-09 00:25:25.680190 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-04-09 00:25:25.680202 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-04-09 00:25:25.680215 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-04-09 00:25:25.680228 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-04-09 00:25:25.680239 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-04-09 00:25:25.842957 | orchestrator | + osism apply --environment custom facts
2026-04-09 00:25:27.096574 | orchestrator | 2026-04-09 00:25:27 | INFO  | Trying to run play facts in environment custom
2026-04-09 00:25:37.270844 | orchestrator | 2026-04-09 00:25:37 | INFO  | Prepare task for execution of facts.
2026-04-09 00:25:37.346398 | orchestrator | 2026-04-09 00:25:37 | INFO  | Task 9ac7e636-06b0-41be-99c2-158f7832409a (facts) was prepared for execution.
2026-04-09 00:25:37.346496 | orchestrator | 2026-04-09 00:25:37 | INFO  | It takes a moment until task 9ac7e636-06b0-41be-99c2-158f7832409a (facts) has been started and output is visible here.
2026-04-09 00:26:21.888623 | orchestrator |
2026-04-09 00:26:21.888720 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-09 00:26:21.888733 | orchestrator |
2026-04-09 00:26:21.888742 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-09 00:26:21.888764 | orchestrator | Thursday 09 April 2026 00:25:40 +0000 (0:00:00.119) 0:00:00.119 ********
2026-04-09 00:26:21.888773 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:26:21.888782 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:21.888791 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:26:21.888799 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:26:21.888807 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:21.888815 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:26:21.888822 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:26:21.888830 | orchestrator |
2026-04-09 00:26:21.888838 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-09 00:26:21.888846 | orchestrator | Thursday 09 April 2026 00:25:41 +0000 (0:00:01.333) 0:00:01.453 ********
2026-04-09 00:26:21.888854 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:21.888862 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:21.888870 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:26:21.888878 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:26:21.888886 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:26:21.888894 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:26:21.888902 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:26:21.888910 | orchestrator |
2026-04-09 00:26:21.888946 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-09 00:26:21.888955 | orchestrator |
2026-04-09 00:26:21.888963 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-09 00:26:21.888971 | orchestrator | Thursday 09 April 2026 00:25:42 +0000 (0:00:01.182) 0:00:02.636 ********
2026-04-09 00:26:21.889027 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.889036 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.889043 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.889051 | orchestrator |
2026-04-09 00:26:21.889059 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-09 00:26:21.889068 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.071) 0:00:02.708 ********
2026-04-09 00:26:21.889075 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.889083 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.889091 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.889098 | orchestrator |
2026-04-09 00:26:21.889106 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-09 00:26:21.889114 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.180) 0:00:02.888 ********
2026-04-09 00:26:21.889122 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.889129 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.889137 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.889145 | orchestrator |
2026-04-09 00:26:21.889152 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-09 00:26:21.889160 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.172) 0:00:03.060 ********
2026-04-09 00:26:21.889171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:26:21.889186 | orchestrator |
2026-04-09 00:26:21.889201 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-09 00:26:21.889213 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.114) 0:00:03.175 ********
2026-04-09 00:26:21.889226 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.889239 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.889252 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.889265 | orchestrator |
2026-04-09 00:26:21.889278 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-09 00:26:21.889290 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.496) 0:00:03.672 ********
2026-04-09 00:26:21.889303 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:26:21.889318 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:26:21.889343 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:26:21.889360 | orchestrator |
2026-04-09 00:26:21.889373 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-09 00:26:21.889387 | orchestrator | Thursday 09 April 2026 00:25:44 +0000 (0:00:00.099) 0:00:03.771 ********
2026-04-09 00:26:21.889400 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:21.889421 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:26:21.889442 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:26:21.889471 | orchestrator |
2026-04-09 00:26:21.889487 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-09 00:26:21.889501 | orchestrator | Thursday 09 April 2026 00:25:45 +0000 (0:00:01.026) 0:00:04.797 ********
2026-04-09 00:26:21.889514 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.889527 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.889539 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.889551 | orchestrator |
2026-04-09 00:26:21.889565 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-09 00:26:21.889579 | orchestrator | Thursday 09 April 2026 00:25:45 +0000 (0:00:00.465) 0:00:05.262 ********
2026-04-09 00:26:21.889592 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:21.889607 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:26:21.889619 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:26:21.889633 | orchestrator |
2026-04-09 00:26:21.889659 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-09 00:26:21.889673 | orchestrator | Thursday 09 April 2026 00:25:46 +0000 (0:00:01.104) 0:00:06.367 ********
2026-04-09 00:26:21.889687 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:21.889701 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:26:21.889715 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:26:21.889727 | orchestrator |
2026-04-09 00:26:21.889741 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-09 00:26:21.889755 | orchestrator | Thursday 09 April 2026 00:26:03 +0000 (0:00:17.208) 0:00:23.575 ********
2026-04-09 00:26:21.889769 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:26:21.889782 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:26:21.889794 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:26:21.889806 | orchestrator |
2026-04-09 00:26:21.889819 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-09 00:26:21.889854 | orchestrator | Thursday 09 April 2026 00:26:03 +0000 (0:00:00.083) 0:00:23.659 ********
2026-04-09 00:26:21.889869 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:21.889883 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:26:21.889897 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:26:21.889911 | orchestrator |
2026-04-09 00:26:21.889924 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-09 00:26:21.889938 | orchestrator | Thursday 09 April 2026 00:26:12 +0000 (0:00:08.167) 0:00:31.826 ********
2026-04-09 00:26:21.889952 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.889967 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.890005 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.890080 | orchestrator |
2026-04-09 00:26:21.890099 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-09 00:26:21.890115 | orchestrator | Thursday 09 April 2026 00:26:12 +0000 (0:00:00.544) 0:00:32.370 ********
2026-04-09 00:26:21.890130 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-09 00:26:21.890145 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-09 00:26:21.890161 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-09 00:26:21.890178 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-09 00:26:21.890195 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-09 00:26:21.890211 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-09 00:26:21.890226 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-09 00:26:21.890241 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-09 00:26:21.890256 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-09 00:26:21.890272 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-09 00:26:21.890287 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-09 00:26:21.890302 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-09 00:26:21.890316 | orchestrator |
2026-04-09 00:26:21.890332 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-09 00:26:21.890347 | orchestrator | Thursday 09 April 2026 00:26:16 +0000 (0:00:03.853) 0:00:36.224 ********
2026-04-09 00:26:21.890363 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.890378 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.890394 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.890409 | orchestrator |
2026-04-09 00:26:21.890426 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:26:21.890458 | orchestrator |
2026-04-09 00:26:21.890472 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:26:21.890534 | orchestrator | Thursday 09 April 2026 00:26:17 +0000 (0:00:01.399) 0:00:37.623 ********
2026-04-09 00:26:21.890550 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:21.890575 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:21.890588 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:21.890603 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:21.890616 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:21.890631 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:21.890645 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:21.890659 | orchestrator |
2026-04-09 00:26:21.890673 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:26:21.890689 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:26:21.890704 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:26:21.890720 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:26:21.890736 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:26:21.890751 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:26:21.890765 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:26:21.890779 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:26:21.890792 | orchestrator |
2026-04-09 00:26:21.890804 | orchestrator |
2026-04-09 00:26:21.890818 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:26:21.890832 | orchestrator | Thursday 09 April 2026 00:26:21 +0000 (0:00:03.951) 0:00:41.575 ********
2026-04-09 00:26:21.890846 | orchestrator | ===============================================================================
2026-04-09 00:26:21.890860 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.21s
2026-04-09 00:26:21.890874 | orchestrator | Install required packages (Debian) -------------------------------------- 8.17s
2026-04-09 00:26:21.890888 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.95s
2026-04-09 00:26:21.890902 | orchestrator | Copy fact files --------------------------------------------------------- 3.85s
2026-04-09 00:26:21.890916 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.40s
2026-04-09 00:26:21.890929 | orchestrator | Create custom facts directory ------------------------------------------- 1.33s
2026-04-09 00:26:21.890956 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-04-09 00:26:22.083192 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-04-09 00:26:22.083327 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-04-09 00:26:22.083356 | orchestrator | Create custom facts directory ------------------------------------------- 0.54s
2026-04-09 00:26:22.083366 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.50s
2026-04-09 00:26:22.083375 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-04-09 00:26:22.083384 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-04-09 00:26:22.083393 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2026-04-09 00:26:22.083403 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2026-04-09 00:26:22.083420 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-04-09 00:26:22.083434 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-04-09 00:26:22.083448 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2026-04-09 00:26:22.294775 | orchestrator | + osism apply bootstrap
2026-04-09 00:26:33.722359 | orchestrator | 2026-04-09 00:26:33 | INFO  | Prepare task for execution of bootstrap.
2026-04-09 00:26:33.823199 | orchestrator | 2026-04-09 00:26:33 | INFO  | Task b0029871-d22a-49fc-9d55-b1783a395374 (bootstrap) was prepared for execution.
2026-04-09 00:26:33.823282 | orchestrator | 2026-04-09 00:26:33 | INFO  | It takes a moment until task b0029871-d22a-49fc-9d55-b1783a395374 (bootstrap) has been started and output is visible here.
2026-04-09 00:26:50.243813 | orchestrator |
2026-04-09 00:26:50.244048 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-09 00:26:50.244083 | orchestrator |
2026-04-09 00:26:50.244098 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-09 00:26:50.244109 | orchestrator | Thursday 09 April 2026 00:26:37 +0000 (0:00:00.196) 0:00:00.196 ********
2026-04-09 00:26:50.244121 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:50.244133 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:50.244144 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:50.244155 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:50.244165 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:50.244177 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:50.244190 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:50.244203 | orchestrator |
2026-04-09 00:26:50.244216 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:26:50.244228 | orchestrator |
2026-04-09 00:26:50.244241 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:26:50.244254 | orchestrator | Thursday 09 April 2026 00:26:37 +0000 (0:00:00.350) 0:00:00.547 ********
2026-04-09 00:26:50.244266 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:50.244280 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:50.244294 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:50.244307 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:50.244319 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:50.244331 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:50.244342 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:50.244355 | orchestrator |
2026-04-09 00:26:50.244368 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-09 00:26:50.244382 | orchestrator |
2026-04-09 00:26:50.244395 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:26:50.244407 | orchestrator | Thursday 09 April 2026 00:26:43 +0000 (0:00:05.686) 0:00:06.234 ********
2026-04-09 00:26:50.244420 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-09 00:26:50.244433 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-09 00:26:50.244445 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-09 00:26:50.244458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-09 00:26:50.244470 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-09 00:26:50.244484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:26:50.244497 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-09 00:26:50.244509 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-09 00:26:50.244522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:26:50.244534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:26:50.244545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:26:50.244556 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-09 00:26:50.244566 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:26:50.244577 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-09 00:26:50.244587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:26:50.244598 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-09 00:26:50.244634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 00:26:50.244646 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:26:50.244657 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 00:26:50.244668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-09 00:26:50.244678 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:26:50.244689 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 00:26:50.244699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 00:26:50.244710 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:26:50.244720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:26:50.244731 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 00:26:50.244741 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:26:50.244753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:26:50.244763 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:26:50.244774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:26:50.244784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 00:26:50.244795 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:26:50.244806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-09 00:26:50.244816 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 00:26:50.244827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:26:50.244837 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-09 00:26:50.244848 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:26:50.244858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:26:50.244869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 00:26:50.244879 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:26:50.244890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:26:50.244901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:26:50.244911 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 00:26:50.244922 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:26:50.244933 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:26:50.244943 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:26:50.244954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:26:50.245016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 00:26:50.245036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:26:50.245048 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 00:26:50.245059 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 00:26:50.245069 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 00:26:50.245080 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:26:50.245090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 00:26:50.245100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 00:26:50.245111 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:26:50.245121 | orchestrator |
2026-04-09 00:26:50.245132 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-09 00:26:50.245143 | orchestrator |
2026-04-09 00:26:50.245153 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-09 00:26:50.245164 | orchestrator | Thursday 09 April 2026 00:26:43 +0000 (0:00:00.479) 0:00:06.713 ********
2026-04-09 00:26:50.245175 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:50.245186 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:50.245205 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:50.245216 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:50.245227 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:50.245237 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:50.245247 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:50.245258 | orchestrator |
2026-04-09 00:26:50.245268 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-09 00:26:50.245279 | orchestrator | Thursday 09 April 2026 00:26:44 +0000 (0:00:01.214) 0:00:07.928 ********
2026-04-09 00:26:50.245290 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:50.245300 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:50.245311 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:50.245321 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:50.245332 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:50.245342 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:50.245353 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:50.245363 | orchestrator |
2026-04-09 00:26:50.245374 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-09 00:26:50.245385 | orchestrator | Thursday 09 April 2026 00:26:46 +0000 (0:00:01.179) 0:00:09.107 ********
2026-04-09 00:26:50.245397 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:26:50.245410 | orchestrator | 2026-04-09 00:26:50.245421 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-09 00:26:50.245432 | orchestrator | Thursday 09 April 2026 00:26:46 +0000 (0:00:00.267) 0:00:09.375 ******** 2026-04-09 00:26:50.245442 | orchestrator | changed: [testbed-manager] 2026-04-09 00:26:50.245453 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:50.245464 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:50.245475 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:50.245485 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:50.245496 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:50.245506 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:50.245517 | orchestrator | 2026-04-09 00:26:50.245528 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-09 00:26:50.245538 | orchestrator | Thursday 09 April 2026 00:26:47 +0000 (0:00:01.460) 0:00:10.836 ******** 2026-04-09 00:26:50.245549 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:26:50.245562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:26:50.245575 | orchestrator | 2026-04-09 00:26:50.245585 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-09 00:26:50.245615 | orchestrator | Thursday 09 April 2026 00:26:48 +0000 (0:00:00.269) 0:00:11.105 ******** 2026-04-09 00:26:50.245626 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:50.245637 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:50.245648 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:50.245658 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:50.245674 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:50.245685 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:50.245696 | orchestrator | 2026-04-09 00:26:50.245706 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-09 00:26:50.245717 | orchestrator | Thursday 09 April 2026 00:26:49 +0000 (0:00:00.985) 0:00:12.090 ******** 2026-04-09 00:26:50.245728 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:26:50.245738 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:50.245749 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:50.245759 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:50.245770 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:50.245780 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:50.245797 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:50.245808 | orchestrator | 2026-04-09 00:26:50.245818 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-09 00:26:50.245829 | orchestrator | Thursday 09 April 2026 00:26:49 +0000 (0:00:00.608) 0:00:12.698 ******** 2026-04-09 00:26:50.245840 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:26:50.245851 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:26:50.245861 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:26:50.245872 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:26:50.245882 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:26:50.245892 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:26:50.245903 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:50.245914 | orchestrator | 2026-04-09 00:26:50.245925 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-09 00:26:50.245936 | orchestrator | Thursday 09 April 2026 00:26:50 +0000 (0:00:00.413) 0:00:13.112 ******** 2026-04-09 00:26:50.245947 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:26:50.245958 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:26:50.246000 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:27:02.252179 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:27:02.252774 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:27:02.252806 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:27:02.252813 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:27:02.252820 | orchestrator | 2026-04-09 00:27:02.252828 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-09 00:27:02.252837 | orchestrator | Thursday 09 April 2026 00:26:50 +0000 (0:00:00.211) 0:00:13.323 ******** 2026-04-09 00:27:02.252847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:27:02.252868 | orchestrator | 2026-04-09 00:27:02.252874 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-09 00:27:02.252882 | orchestrator | Thursday 09 April 2026 00:26:50 +0000 (0:00:00.307) 0:00:13.631 ******** 2026-04-09 00:27:02.252898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:27:02.252905 | orchestrator | 2026-04-09 00:27:02.252912 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-09 
00:27:02.252919 | orchestrator | Thursday 09 April 2026 00:26:50 +0000 (0:00:00.288) 0:00:13.919 ******** 2026-04-09 00:27:02.252926 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.252933 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.252940 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.252946 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.252954 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.252983 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.252990 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.252997 | orchestrator | 2026-04-09 00:27:02.253004 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-09 00:27:02.253012 | orchestrator | Thursday 09 April 2026 00:26:52 +0000 (0:00:01.434) 0:00:15.353 ******** 2026-04-09 00:27:02.253019 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:27:02.253024 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:27:02.253029 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:27:02.253034 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:27:02.253038 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:27:02.253043 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:27:02.253048 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:27:02.253052 | orchestrator | 2026-04-09 00:27:02.253057 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-09 00:27:02.253083 | orchestrator | Thursday 09 April 2026 00:26:52 +0000 (0:00:00.206) 0:00:15.560 ******** 2026-04-09 00:27:02.253088 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253091 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253095 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253099 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253103 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253106 | orchestrator 
| ok: [testbed-node-4] 2026-04-09 00:27:02.253110 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253114 | orchestrator | 2026-04-09 00:27:02.253118 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-09 00:27:02.253123 | orchestrator | Thursday 09 April 2026 00:26:53 +0000 (0:00:00.581) 0:00:16.142 ******** 2026-04-09 00:27:02.253129 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:27:02.253135 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:27:02.253140 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:27:02.253145 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:27:02.253151 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:27:02.253157 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:27:02.253163 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:27:02.253169 | orchestrator | 2026-04-09 00:27:02.253176 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-09 00:27:02.253183 | orchestrator | Thursday 09 April 2026 00:26:53 +0000 (0:00:00.217) 0:00:16.360 ******** 2026-04-09 00:27:02.253190 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253195 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:02.253205 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:02.253209 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:02.253213 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:27:02.253216 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:27:02.253220 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:27:02.253224 | orchestrator | 2026-04-09 00:27:02.253228 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-09 00:27:02.253232 | orchestrator | Thursday 09 April 2026 00:26:53 +0000 (0:00:00.543) 0:00:16.903 ******** 2026-04-09 00:27:02.253235 | orchestrator | ok: 
[testbed-manager] 2026-04-09 00:27:02.253239 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:02.253244 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:02.253251 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:27:02.253257 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:27:02.253263 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:27:02.253269 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:02.253275 | orchestrator | 2026-04-09 00:27:02.253281 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-09 00:27:02.253287 | orchestrator | Thursday 09 April 2026 00:26:55 +0000 (0:00:01.145) 0:00:18.049 ******** 2026-04-09 00:27:02.253294 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253300 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253307 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253314 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253320 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253326 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253332 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253339 | orchestrator | 2026-04-09 00:27:02.253345 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-09 00:27:02.253352 | orchestrator | Thursday 09 April 2026 00:26:56 +0000 (0:00:01.018) 0:00:19.067 ******** 2026-04-09 00:27:02.253373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:27:02.253377 | orchestrator | 2026-04-09 00:27:02.253381 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-09 00:27:02.253384 | orchestrator | Thursday 09 April 2026 
00:26:56 +0000 (0:00:00.298) 0:00:19.366 ******** 2026-04-09 00:27:02.253393 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:27:02.253397 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:02.253400 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:27:02.253404 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:02.253408 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:02.253411 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:27:02.253415 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:27:02.253419 | orchestrator | 2026-04-09 00:27:02.253422 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 00:27:02.253426 | orchestrator | Thursday 09 April 2026 00:26:57 +0000 (0:00:01.232) 0:00:20.599 ******** 2026-04-09 00:27:02.253430 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253433 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253437 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253441 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253444 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253448 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253452 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253455 | orchestrator | 2026-04-09 00:27:02.253459 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 00:27:02.253463 | orchestrator | Thursday 09 April 2026 00:26:57 +0000 (0:00:00.238) 0:00:20.838 ******** 2026-04-09 00:27:02.253467 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253470 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253474 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253478 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253481 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253485 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253489 | 
orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253493 | orchestrator | 2026-04-09 00:27:02.253496 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 00:27:02.253500 | orchestrator | Thursday 09 April 2026 00:26:58 +0000 (0:00:00.229) 0:00:21.067 ******** 2026-04-09 00:27:02.253504 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253508 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253511 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253516 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253522 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253528 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253533 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253539 | orchestrator | 2026-04-09 00:27:02.253544 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 00:27:02.253549 | orchestrator | Thursday 09 April 2026 00:26:58 +0000 (0:00:00.223) 0:00:21.291 ******** 2026-04-09 00:27:02.253555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:27:02.253562 | orchestrator | 2026-04-09 00:27:02.253568 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 00:27:02.253573 | orchestrator | Thursday 09 April 2026 00:26:58 +0000 (0:00:00.266) 0:00:21.558 ******** 2026-04-09 00:27:02.253579 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253584 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253589 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253595 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253600 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253606 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:27:02.253611 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253616 | orchestrator | 2026-04-09 00:27:02.253622 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 00:27:02.253627 | orchestrator | Thursday 09 April 2026 00:26:59 +0000 (0:00:00.675) 0:00:22.233 ******** 2026-04-09 00:27:02.253632 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:27:02.253638 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:27:02.253648 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:27:02.253654 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:27:02.253661 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:27:02.253667 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:27:02.253673 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:27:02.253678 | orchestrator | 2026-04-09 00:27:02.253684 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 00:27:02.253689 | orchestrator | Thursday 09 April 2026 00:26:59 +0000 (0:00:00.223) 0:00:22.457 ******** 2026-04-09 00:27:02.253694 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253700 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:02.253707 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253714 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253718 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:02.253722 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:02.253726 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253729 | orchestrator | 2026-04-09 00:27:02.253733 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 00:27:02.253737 | orchestrator | Thursday 09 April 2026 00:27:00 +0000 (0:00:01.034) 0:00:23.491 ******** 2026-04-09 00:27:02.253741 | orchestrator | ok: [testbed-manager] 2026-04-09 
00:27:02.253745 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:02.253749 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:02.253752 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:02.253756 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253760 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253764 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:02.253767 | orchestrator | 2026-04-09 00:27:02.253771 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 00:27:02.253776 | orchestrator | Thursday 09 April 2026 00:27:01 +0000 (0:00:00.631) 0:00:24.122 ******** 2026-04-09 00:27:02.253781 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:02.253787 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:02.253793 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:02.253799 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:02.253810 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:43.307210 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307322 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:43.307335 | orchestrator | 2026-04-09 00:27:43.307342 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 00:27:43.307351 | orchestrator | Thursday 09 April 2026 00:27:02 +0000 (0:00:01.180) 0:00:25.303 ******** 2026-04-09 00:27:43.307357 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:43.307364 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:43.307370 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307376 | orchestrator | changed: [testbed-manager] 2026-04-09 00:27:43.307382 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:43.307389 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:43.307395 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:43.307401 | orchestrator | 2026-04-09 00:27:43.307408 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-09 00:27:43.307414 | orchestrator | Thursday 09 April 2026 00:27:19 +0000 (0:00:17.622) 0:00:42.925 ******** 2026-04-09 00:27:43.307421 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:43.307427 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:43.307433 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:43.307439 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:43.307446 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:43.307452 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:43.307458 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307464 | orchestrator | 2026-04-09 00:27:43.307470 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-09 00:27:43.307476 | orchestrator | Thursday 09 April 2026 00:27:20 +0000 (0:00:00.238) 0:00:43.163 ******** 2026-04-09 00:27:43.307482 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:43.307529 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:43.307536 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:43.307542 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:43.307548 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:43.307554 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:43.307560 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307566 | orchestrator | 2026-04-09 00:27:43.307572 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-09 00:27:43.307578 | orchestrator | Thursday 09 April 2026 00:27:20 +0000 (0:00:00.218) 0:00:43.382 ******** 2026-04-09 00:27:43.307584 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:43.307590 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:43.307596 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:43.307602 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:43.307608 | orchestrator | ok: 
[testbed-node-3] 2026-04-09 00:27:43.307614 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:43.307620 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307626 | orchestrator | 2026-04-09 00:27:43.307632 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-09 00:27:43.307638 | orchestrator | Thursday 09 April 2026 00:27:20 +0000 (0:00:00.221) 0:00:43.604 ******** 2026-04-09 00:27:43.307647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:27:43.307656 | orchestrator | 2026-04-09 00:27:43.307679 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-09 00:27:43.307685 | orchestrator | Thursday 09 April 2026 00:27:20 +0000 (0:00:00.283) 0:00:43.888 ******** 2026-04-09 00:27:43.307692 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:43.307698 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:43.307704 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:43.307709 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307715 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:43.307721 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:43.307727 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:43.307733 | orchestrator | 2026-04-09 00:27:43.307739 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-09 00:27:43.307745 | orchestrator | Thursday 09 April 2026 00:27:22 +0000 (0:00:01.817) 0:00:45.705 ******** 2026-04-09 00:27:43.307751 | orchestrator | changed: [testbed-manager] 2026-04-09 00:27:43.307757 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:43.307763 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:27:43.307769 | orchestrator | 
changed: [testbed-node-3] 2026-04-09 00:27:43.307775 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:27:43.307781 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:27:43.307791 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:27:43.307797 | orchestrator | 2026-04-09 00:27:43.307803 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-09 00:27:43.307810 | orchestrator | Thursday 09 April 2026 00:27:23 +0000 (0:00:01.122) 0:00:46.828 ******** 2026-04-09 00:27:43.307816 | orchestrator | ok: [testbed-manager] 2026-04-09 00:27:43.307822 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:27:43.307828 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:27:43.307834 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:27:43.307840 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:27:43.307846 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:27:43.307852 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:27:43.307858 | orchestrator | 2026-04-09 00:27:43.307864 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-09 00:27:43.307870 | orchestrator | Thursday 09 April 2026 00:27:24 +0000 (0:00:00.847) 0:00:47.675 ******** 2026-04-09 00:27:43.307877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:27:43.307894 | orchestrator | 2026-04-09 00:27:43.307905 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-09 00:27:43.307915 | orchestrator | Thursday 09 April 2026 00:27:24 +0000 (0:00:00.286) 0:00:47.962 ******** 2026-04-09 00:27:43.307926 | orchestrator | changed: [testbed-manager] 2026-04-09 00:27:43.307936 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:27:43.307990 | 
orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:43.308001 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:43.308011 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:43.308021 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:43.308029 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:43.308035 | orchestrator |
2026-04-09 00:27:43.308056 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-09 00:27:43.308063 | orchestrator | Thursday 09 April 2026 00:27:26 +0000 (0:00:01.121) 0:00:49.084 ********
2026-04-09 00:27:43.308069 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:27:43.308075 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:27:43.308081 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:27:43.308087 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:27:43.308093 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:27:43.308099 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:27:43.308105 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:27:43.308111 | orchestrator |
2026-04-09 00:27:43.308117 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-09 00:27:43.308123 | orchestrator | Thursday 09 April 2026 00:27:26 +0000 (0:00:00.199) 0:00:49.283 ********
2026-04-09 00:27:43.308129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:43.308136 | orchestrator |
2026-04-09 00:27:43.308142 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-09 00:27:43.308148 | orchestrator | Thursday 09 April 2026 00:27:26 +0000 (0:00:00.286) 0:00:49.570 ********
2026-04-09 00:27:43.308154 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:43.308160 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:43.308166 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:43.308172 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:43.308178 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:43.308184 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:43.308190 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:43.308196 | orchestrator |
2026-04-09 00:27:43.308202 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-09 00:27:43.308209 | orchestrator | Thursday 09 April 2026 00:27:28 +0000 (0:00:01.769) 0:00:51.340 ********
2026-04-09 00:27:43.308215 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:43.308221 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:43.308227 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:43.308233 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:43.308239 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:43.308245 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:43.308251 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:43.308257 | orchestrator |
2026-04-09 00:27:43.308263 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-09 00:27:43.308269 | orchestrator | Thursday 09 April 2026 00:27:29 +0000 (0:00:01.150) 0:00:52.490 ********
2026-04-09 00:27:43.308275 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:27:43.308281 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:27:43.308287 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:27:43.308293 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:27:43.308299 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:27:43.308305 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:27:43.308318 | orchestrator | changed: [testbed-manager]
2026-04-09 00:27:43.308324 | orchestrator |
2026-04-09 00:27:43.308330 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-09 00:27:43.308336 | orchestrator | Thursday 09 April 2026 00:27:40 +0000 (0:00:11.218) 0:01:03.709 ********
2026-04-09 00:27:43.308342 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:43.308348 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:43.308355 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:43.308360 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:43.308367 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:43.308372 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:43.308378 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:43.308384 | orchestrator |
2026-04-09 00:27:43.308390 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-09 00:27:43.308397 | orchestrator | Thursday 09 April 2026 00:27:41 +0000 (0:00:00.934) 0:01:04.643 ********
2026-04-09 00:27:43.308403 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:43.308409 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:43.308415 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:43.308421 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:43.308427 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:43.308433 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:43.308439 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:43.308445 | orchestrator |
2026-04-09 00:27:43.308455 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-09 00:27:43.308461 | orchestrator | Thursday 09 April 2026 00:27:42 +0000 (0:00:00.916) 0:01:05.560 ********
2026-04-09 00:27:43.308467 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:43.308473 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:43.308479 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:43.308485 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:43.308491 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:43.308497 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:43.308503 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:43.308509 | orchestrator |
2026-04-09 00:27:43.308515 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-09 00:27:43.308522 | orchestrator | Thursday 09 April 2026 00:27:42 +0000 (0:00:00.218) 0:01:05.778 ********
2026-04-09 00:27:43.308528 | orchestrator | ok: [testbed-manager]
2026-04-09 00:27:43.308534 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:27:43.308540 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:27:43.308546 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:27:43.308552 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:27:43.308558 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:27:43.308564 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:27:43.308570 | orchestrator |
2026-04-09 00:27:43.308576 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-09 00:27:43.308582 | orchestrator | Thursday 09 April 2026 00:27:43 +0000 (0:00:00.224) 0:01:06.003 ********
2026-04-09 00:27:43.308589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:27:43.308595 | orchestrator |
2026-04-09 00:27:43.308605 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-09 00:29:51.505068 | orchestrator | Thursday 09 April 2026 00:27:43 +0000 (0:00:00.275) 0:01:06.279 ********
2026-04-09 00:29:51.505155 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:51.505164 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505170 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505175 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505180 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505185 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505191 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505196 | orchestrator |
2026-04-09 00:29:51.505202 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-09 00:29:51.505225 | orchestrator | Thursday 09 April 2026 00:27:45 +0000 (0:00:01.799) 0:01:08.078 ********
2026-04-09 00:29:51.505231 | orchestrator | changed: [testbed-manager]
2026-04-09 00:29:51.505236 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:29:51.505241 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:29:51.505246 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:29:51.505251 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:29:51.505256 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:29:51.505261 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:29:51.505266 | orchestrator |
2026-04-09 00:29:51.505271 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-09 00:29:51.505277 | orchestrator | Thursday 09 April 2026 00:27:45 +0000 (0:00:00.675) 0:01:08.754 ********
2026-04-09 00:29:51.505282 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:51.505287 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505292 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505296 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505301 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505306 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505311 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505316 | orchestrator |
2026-04-09 00:29:51.505320 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-09 00:29:51.505325 | orchestrator | Thursday 09 April 2026 00:27:46 +0000 (0:00:00.281) 0:01:09.035 ********
2026-04-09 00:29:51.505330 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:51.505335 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505340 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505344 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505349 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505354 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505359 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505363 | orchestrator |
2026-04-09 00:29:51.505368 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-09 00:29:51.505373 | orchestrator | Thursday 09 April 2026 00:27:47 +0000 (0:00:01.361) 0:01:10.396 ********
2026-04-09 00:29:51.505378 | orchestrator | changed: [testbed-manager]
2026-04-09 00:29:51.505383 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:29:51.505388 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:29:51.505392 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:29:51.505397 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:29:51.505402 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:29:51.505407 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:29:51.505412 | orchestrator |
2026-04-09 00:29:51.505416 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-09 00:29:51.505421 | orchestrator | Thursday 09 April 2026 00:27:49 +0000 (0:00:02.203) 0:01:12.600 ********
2026-04-09 00:29:51.505426 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:51.505431 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505436 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505441 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505446 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505451 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505455 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505460 | orchestrator |
2026-04-09 00:29:51.505465 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-09 00:29:51.505470 | orchestrator | Thursday 09 April 2026 00:27:52 +0000 (0:00:02.934) 0:01:15.535 ********
2026-04-09 00:29:51.505475 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:51.505480 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505484 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505489 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505494 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505499 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505503 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505508 | orchestrator |
2026-04-09 00:29:51.505513 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-09 00:29:51.505533 | orchestrator | Thursday 09 April 2026 00:28:26 +0000 (0:00:34.427) 0:01:49.962 ********
2026-04-09 00:29:51.505538 | orchestrator | changed: [testbed-manager]
2026-04-09 00:29:51.505543 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:29:51.505547 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:29:51.505552 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:29:51.505557 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:29:51.505562 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:29:51.505567 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:29:51.505571 | orchestrator |
2026-04-09 00:29:51.505576 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-09 00:29:51.505581 | orchestrator | Thursday 09 April 2026 00:29:37 +0000 (0:01:10.283) 0:03:00.246 ********
2026-04-09 00:29:51.505586 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:51.505591 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505596 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505600 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505605 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505610 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505618 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505626 | orchestrator |
2026-04-09 00:29:51.505634 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-09 00:29:51.505644 | orchestrator | Thursday 09 April 2026 00:29:39 +0000 (0:00:01.964) 0:03:02.210 ********
2026-04-09 00:29:51.505652 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:51.505661 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:51.505669 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:51.505677 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:51.505685 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:51.505692 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:51.505700 | orchestrator | changed: [testbed-manager]
2026-04-09 00:29:51.505708 | orchestrator |
2026-04-09 00:29:51.505716 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-09 00:29:51.505724 | orchestrator | Thursday 09 April 2026 00:29:50 +0000 (0:00:11.250) 0:03:13.461 ********
2026-04-09 00:29:51.505756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-09 00:29:51.505774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-09 00:29:51.505786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-09 00:29:51.505797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-09 00:29:51.505814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-09 00:29:51.505824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-09 00:29:51.505836 | orchestrator |
2026-04-09 00:29:51.505846 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-09 00:29:51.505854 | orchestrator | Thursday 09 April 2026 00:29:50 +0000 (0:00:00.332) 0:03:13.794 ********
2026-04-09 00:29:51.505863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505872 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:29:51.505881 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505890 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:29:51.505897 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505902 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:29:51.505932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505938 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:29:51.505943 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505948 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505953 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:29:51.505957 | orchestrator |
2026-04-09 00:29:51.505962 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-09 00:29:51.505967 | orchestrator | Thursday 09 April 2026 00:29:51 +0000 (0:00:00.628) 0:03:14.422 ********
2026-04-09 00:29:51.505972 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:51.505978 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:51.505983 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:51.505987 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:51.505992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:51.506002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.633567 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.633678 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.633700 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.633710 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.633721 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:29:59.633731 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:59.633740 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:59.633748 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:59.633778 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:59.633787 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:59.633796 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:59.633805 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.633813 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:59.633822 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.633831 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:59.633839 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.633848 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:59.633856 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.633864 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:59.633873 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.633881 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.633890 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:29:59.633941 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.633951 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.633959 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.633968 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.633976 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:29:59.633985 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:59.633994 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:59.634094 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:59.634107 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:59.634118 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:59.634128 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.634138 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.634149 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.634159 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.634169 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.634180 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:29:59.634190 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:59.634199 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:59.634209 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:59.634228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:59.634239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:59.634265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:29:59.634275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:59.634285 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:29:59.634295 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:59.634305 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:59.634315 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.634325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:59.634336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.634347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:59.634357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.634367 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.634377 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:29:59.634386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.634397 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:29:59.634407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.634417 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:29:59.634427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:29:59.634437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.634448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:29:59.634458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.634468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:29:59.634478 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.634487 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:29:59.634495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.634504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:29:59.634512 | orchestrator |
2026-04-09 00:29:59.634522 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-09 00:29:59.634531 | orchestrator | Thursday 09 April 2026 00:29:58 +0000 (0:00:06.954) 0:03:21.377 ********
2026-04-09 00:29:59.634539 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634556 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634570 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634585 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634594 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634602 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:29:59.634611 | orchestrator |
2026-04-09 00:29:59.634619 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-09 00:29:59.634628 | orchestrator | Thursday 09 April 2026 00:29:59 +0000 (0:00:00.711) 0:03:22.089 ********
2026-04-09 00:29:59.634637 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:29:59.634645 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:29:59.634654 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:29:59.634663 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:29:59.634671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:29:59.634680 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:29:59.634688 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:29:59.634697 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:29:59.634706 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:29:59.634714 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:29:59.634728 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972585 | orchestrator |
2026-04-09 00:30:12.972714 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-09 00:30:12.972737 | orchestrator | Thursday 09 April 2026 00:29:59 +0000 (0:00:00.565) 0:03:22.655 ********
2026-04-09 00:30:12.972755 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972773 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:30:12.972793 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972810 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:30:12.972828 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972845 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:30:12.972863 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972940 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:30:12.972960 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.972997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:30:12.973014 | orchestrator |
2026-04-09 00:30:12.973033 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-09 00:30:12.973051 | orchestrator | Thursday 09 April 2026 00:30:01 +0000 (0:00:01.535) 0:03:24.190 ********
2026-04-09 00:30:12.973068 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973086 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973105 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:30:12.973123 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973142 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:30:12.973197 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973219 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:30:12.973241 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:30:12.973264 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973286 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973308 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:30:12.973331 | orchestrator |
2026-04-09 00:30:12.973352 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-09 00:30:12.973375 | orchestrator | Thursday 09 April 2026 00:30:01 +0000 (0:00:00.701) 0:03:24.892 ********
2026-04-09 00:30:12.973397 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:30:12.973417 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:30:12.973441 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:30:12.973461 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:30:12.973481 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:30:12.973501 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:30:12.973521 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:30:12.973541 | orchestrator |
2026-04-09 00:30:12.973561 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-09 00:30:12.973582 | orchestrator | Thursday 09 April 2026 00:30:02 +0000 (0:00:00.358) 0:03:25.250 ********
2026-04-09 00:30:12.973620 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:12.973641 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:12.973662 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:12.973681 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:12.973701 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:12.973721 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:12.973755 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:12.973776 | orchestrator |
2026-04-09 00:30:12.973796 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-09 00:30:12.973816 | orchestrator | Thursday 09 April 2026 00:30:07 +0000 (0:00:05.473) 0:03:30.724 ********
2026-04-09 00:30:12.973833 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-09 00:30:12.973850 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-09 00:30:12.973867 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:30:12.973945 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-09 00:30:12.973965 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:30:12.973984 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:30:12.974000 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-09 00:30:12.974098 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-09 00:30:12.974124 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:30:12.974141 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-09 00:30:12.974159 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:30:12.974179 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:30:12.974197 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-09 00:30:12.974215 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:30:12.974233 | orchestrator |
2026-04-09 00:30:12.974252 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-09 00:30:12.974270 | orchestrator | Thursday 09 April 2026 00:30:08 +0000 (0:00:00.267) 0:03:30.991 ********
2026-04-09 00:30:12.974288 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-09 00:30:12.974306 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-09 00:30:12.974326 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-09 00:30:12.974379 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-09 00:30:12.974399 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-09 00:30:12.974418 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-09 00:30:12.974457 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-09 00:30:12.974476 | orchestrator |
2026-04-09 00:30:12.974494 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-09 00:30:12.974511 | orchestrator | Thursday 09 April 2026 00:30:09 +0000 (0:00:01.036) 0:03:32.028 ********
2026-04-09 00:30:12.974532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:30:12.974554 | orchestrator |
2026-04-09 00:30:12.974570 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-09 00:30:12.974586 | orchestrator | Thursday 09 April 2026 00:30:09 +0000 (0:00:00.344) 0:03:32.372 ********
2026-04-09 00:30:12.974604 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:12.974623 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:12.974642 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:12.974661 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:12.974679 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:12.974698 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:12.974716 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:12.974734 | orchestrator |
2026-04-09 00:30:12.974752 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-09 00:30:12.974769 | orchestrator | Thursday 09 April 2026 00:30:10 +0000 (0:00:01.264) 0:03:33.637 ********
2026-04-09 00:30:12.974784 | orchestrator | ok: [testbed-manager]
2026-04-09 00:30:12.974800 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:30:12.974815 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:30:12.974831 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:30:12.974846 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:30:12.974863 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:30:12.974931 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:30:12.974952 | orchestrator |
2026-04-09 00:30:12.974969 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-09 00:30:12.974986 | orchestrator | Thursday 09 April 2026 00:30:11 +0000 (0:00:00.550) 0:03:34.187 ********
2026-04-09 00:30:12.975003 | orchestrator | changed: [testbed-manager]
2026-04-09 00:30:12.975019 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:30:12.975035 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:30:12.975052 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:30:12.975067 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:30:12.975082 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:30:12.975097 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:30:12.975113 | orchestrator |
2026-04-09 00:30:12.975129 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-09 00:30:12.975145 | orchestrator | Thursday 09 April 2026 00:30:11 +0000 (0:00:00.614)
0:03:34.801 ******** 2026-04-09 00:30:12.975161 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:12.975176 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:12.975192 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:12.975209 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:12.975224 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:12.975240 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:12.975256 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:12.975273 | orchestrator | 2026-04-09 00:30:12.975290 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-09 00:30:12.975306 | orchestrator | Thursday 09 April 2026 00:30:12 +0000 (0:00:00.630) 0:03:35.431 ******** 2026-04-09 00:30:12.975339 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693086.9768507, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:12.975374 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693117.52164, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:12.975393 | orchestrator | changed: 
[testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693131.2165675, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:12.975449 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693120.8190053, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315380 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693135.7565, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315456 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1775693115.7181342, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315462 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693121.882897, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315466 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315497 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315501 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315507 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315533 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315540 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315546 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 00:30:18.315553 | orchestrator | 2026-04-09 00:30:18.315559 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-09 00:30:18.315567 | orchestrator | Thursday 09 April 2026 00:30:13 +0000 (0:00:01.012) 0:03:36.444 ******** 2026-04-09 00:30:18.315573 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:18.315579 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:18.315585 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:18.315597 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:18.315603 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:18.315608 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:18.315614 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:18.315619 | orchestrator | 2026-04-09 00:30:18.315626 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-09 00:30:18.315631 | orchestrator | Thursday 09 April 2026 00:30:14 +0000 (0:00:01.104) 0:03:37.548 ******** 2026-04-09 00:30:18.315637 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:18.315643 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:18.315649 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:18.315656 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:18.315665 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:18.315671 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:18.315676 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:18.315682 | orchestrator | 2026-04-09 00:30:18.315687 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-09 00:30:18.315694 | orchestrator | Thursday 09 April 2026 00:30:15 +0000 (0:00:01.106) 0:03:38.655 ******** 2026-04-09 00:30:18.315700 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:18.315705 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:18.315710 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:18.315716 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:18.315722 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:18.315728 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:18.315734 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:18.315737 | orchestrator | 2026-04-09 00:30:18.315741 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-09 00:30:18.315745 | orchestrator | Thursday 09 April 2026 00:30:17 +0000 (0:00:01.333) 0:03:39.988 ******** 2026-04-09 00:30:18.315749 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:18.315753 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:18.315757 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:18.315760 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 00:30:18.315764 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:18.315768 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:18.315771 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:18.315775 | orchestrator | 2026-04-09 00:30:18.315779 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-09 00:30:18.315782 | orchestrator | Thursday 09 April 2026 00:30:17 +0000 (0:00:00.232) 0:03:40.221 ******** 2026-04-09 00:30:18.315786 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:18.315791 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:18.315795 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:18.315799 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:18.315802 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:18.315806 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:18.315810 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:18.315813 | orchestrator | 2026-04-09 00:30:18.315818 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-09 00:30:18.315824 | orchestrator | Thursday 09 April 2026 00:30:17 +0000 (0:00:00.696) 0:03:40.918 ******** 2026-04-09 00:30:18.315832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:30:18.315843 | orchestrator | 2026-04-09 00:30:18.315850 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-09 00:30:18.315862 | orchestrator | Thursday 09 April 2026 00:30:18 +0000 (0:00:00.373) 0:03:41.291 ******** 2026-04-09 00:31:32.978214 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978313 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:32.978328 | orchestrator | changed: 
[testbed-node-5] 2026-04-09 00:31:32.978340 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:32.978401 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:32.978415 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:32.978426 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:32.978437 | orchestrator | 2026-04-09 00:31:32.978449 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-09 00:31:32.978462 | orchestrator | Thursday 09 April 2026 00:30:27 +0000 (0:00:08.773) 0:03:50.064 ******** 2026-04-09 00:31:32.978474 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978485 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.978498 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.978510 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.978521 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.978533 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:32.978544 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.978553 | orchestrator | 2026-04-09 00:31:32.978560 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-09 00:31:32.978567 | orchestrator | Thursday 09 April 2026 00:30:28 +0000 (0:00:01.365) 0:03:51.430 ******** 2026-04-09 00:31:32.978573 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978580 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.978587 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.978593 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.978600 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.978606 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:32.978612 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.978619 | orchestrator | 2026-04-09 00:31:32.978645 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-09 00:31:32.978656 | orchestrator | 
Thursday 09 April 2026 00:30:29 +0000 (0:00:01.027) 0:03:52.457 ******** 2026-04-09 00:31:32.978666 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978677 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.978688 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.978699 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.978711 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.978722 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:32.978733 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.978742 | orchestrator | 2026-04-09 00:31:32.978750 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-09 00:31:32.978759 | orchestrator | Thursday 09 April 2026 00:30:29 +0000 (0:00:00.275) 0:03:52.733 ******** 2026-04-09 00:31:32.978767 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978775 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.978782 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.978790 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.978797 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.978838 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:32.978846 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.978852 | orchestrator | 2026-04-09 00:31:32.978860 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-09 00:31:32.978867 | orchestrator | Thursday 09 April 2026 00:30:30 +0000 (0:00:00.297) 0:03:53.030 ******** 2026-04-09 00:31:32.978874 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978881 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.978889 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.978896 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.978904 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.978912 | orchestrator | ok: [testbed-node-4] 2026-04-09 
00:31:32.978919 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.978926 | orchestrator | 2026-04-09 00:31:32.978933 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-09 00:31:32.978941 | orchestrator | Thursday 09 April 2026 00:30:30 +0000 (0:00:00.277) 0:03:53.308 ******** 2026-04-09 00:31:32.978948 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.978955 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.978962 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.978969 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.978984 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.978991 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:32.978998 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.979005 | orchestrator | 2026-04-09 00:31:32.979012 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-09 00:31:32.979019 | orchestrator | Thursday 09 April 2026 00:30:35 +0000 (0:00:04.750) 0:03:58.058 ******** 2026-04-09 00:31:32.979029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:31:32.979038 | orchestrator | 2026-04-09 00:31:32.979044 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-09 00:31:32.979051 | orchestrator | Thursday 09 April 2026 00:30:35 +0000 (0:00:00.354) 0:03:58.412 ******** 2026-04-09 00:31:32.979057 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979063 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-09 00:31:32.979069 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979075 | orchestrator | skipping: 
[testbed-node-0] => (item=apt-daily)  2026-04-09 00:31:32.979081 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:32.979087 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979093 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-04-09 00:31:32.979099 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:32.979105 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979111 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-04-09 00:31:32.979117 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:32.979123 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979129 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-09 00:31:32.979135 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:32.979141 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979148 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:32.979169 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-09 00:31:32.979176 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:32.979182 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-09 00:31:32.979188 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-04-09 00:31:32.979195 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:32.979201 | orchestrator | 2026-04-09 00:31:32.979207 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-04-09 00:31:32.979213 | orchestrator | Thursday 09 April 2026 00:30:35 +0000 (0:00:00.303) 0:03:58.715 ******** 2026-04-09 00:31:32.979220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:31:32.979226 | orchestrator | 2026-04-09 00:31:32.979233 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-04-09 00:31:32.979239 | orchestrator | Thursday 09 April 2026 00:30:36 +0000 (0:00:00.455) 0:03:59.171 ******** 2026-04-09 00:31:32.979245 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-04-09 00:31:32.979251 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-04-09 00:31:32.979257 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:32.979264 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:32.979284 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-04-09 00:31:32.979290 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-04-09 00:31:32.979296 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:32.979307 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:32.979313 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-04-09 00:31:32.979319 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-04-09 00:31:32.979325 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:32.979332 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:32.979338 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-04-09 00:31:32.979344 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:32.979350 | orchestrator | 2026-04-09 00:31:32.979357 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-04-09 00:31:32.979368 | orchestrator | Thursday 09 April 2026 00:30:36 +0000 (0:00:00.308) 0:03:59.480 ******** 2026-04-09 00:31:32.979378 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:31:32.979388 | orchestrator | 2026-04-09 00:31:32.979397 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-04-09 00:31:32.979408 | orchestrator | Thursday 09 April 2026 00:30:36 +0000 (0:00:00.398) 0:03:59.878 ******** 2026-04-09 00:31:32.979419 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:32.979435 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:32.979445 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:32.979454 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:32.979461 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:32.979467 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:32.979473 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:32.979479 | orchestrator | 2026-04-09 00:31:32.979485 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-04-09 00:31:32.979491 | orchestrator | Thursday 09 April 2026 00:31:08 +0000 (0:00:31.452) 0:04:31.331 ******** 2026-04-09 00:31:32.979497 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:32.979503 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:32.979509 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:32.979515 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:32.979521 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:32.979527 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:32.979533 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:32.979539 | orchestrator | 2026-04-09 00:31:32.979545 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-04-09 00:31:32.979552 | orchestrator | 
Thursday 09 April 2026 00:31:17 +0000 (0:00:08.709) 0:04:40.041 ******** 2026-04-09 00:31:32.979558 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:32.979564 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:32.979570 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:32.979576 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:32.979582 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:32.979588 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:32.979594 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:32.979600 | orchestrator | 2026-04-09 00:31:32.979606 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-09 00:31:32.979612 | orchestrator | Thursday 09 April 2026 00:31:24 +0000 (0:00:07.839) 0:04:47.880 ******** 2026-04-09 00:31:32.979618 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:32.979624 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:32.979630 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:32.979636 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:32.979642 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:32.979648 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:32.979654 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:32.979660 | orchestrator | 2026-04-09 00:31:32.979666 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-09 00:31:32.979681 | orchestrator | Thursday 09 April 2026 00:31:26 +0000 (0:00:01.777) 0:04:49.657 ******** 2026-04-09 00:31:32.979691 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:32.979702 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:32.979713 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:32.979723 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:32.979734 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:32.979744 | orchestrator | changed: 
[testbed-node-5] 2026-04-09 00:31:32.979752 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:32.979758 | orchestrator | 2026-04-09 00:31:32.979769 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-09 00:31:44.048288 | orchestrator | Thursday 09 April 2026 00:31:32 +0000 (0:00:06.293) 0:04:55.950 ******** 2026-04-09 00:31:44.048388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:31:44.048402 | orchestrator | 2026-04-09 00:31:44.048412 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-09 00:31:44.048423 | orchestrator | Thursday 09 April 2026 00:31:33 +0000 (0:00:00.391) 0:04:56.342 ******** 2026-04-09 00:31:44.048432 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:44.048442 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:44.048451 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:44.048459 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:44.048467 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:44.048476 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:44.048484 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:44.048493 | orchestrator | 2026-04-09 00:31:44.048502 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-09 00:31:44.048510 | orchestrator | Thursday 09 April 2026 00:31:34 +0000 (0:00:00.739) 0:04:57.081 ******** 2026-04-09 00:31:44.048519 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:44.048528 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:44.048537 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:44.048545 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:44.048553 | 
orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:44.048562 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:44.048570 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:44.048579 | orchestrator | 2026-04-09 00:31:44.048587 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-09 00:31:44.048596 | orchestrator | Thursday 09 April 2026 00:31:35 +0000 (0:00:01.730) 0:04:58.812 ******** 2026-04-09 00:31:44.048604 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:44.048613 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:44.048621 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:44.048630 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:44.048638 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:44.048647 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:44.048655 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:44.048664 | orchestrator | 2026-04-09 00:31:44.048673 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-09 00:31:44.048681 | orchestrator | Thursday 09 April 2026 00:31:36 +0000 (0:00:00.729) 0:04:59.542 ******** 2026-04-09 00:31:44.048690 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:44.048698 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:44.048707 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:44.048715 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:44.048724 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:44.048732 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:44.048741 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:44.048749 | orchestrator | 2026-04-09 00:31:44.048757 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-09 00:31:44.048782 | orchestrator | Thursday 09 April 2026 00:31:36 +0000 (0:00:00.279) 
0:04:59.821 ******** 2026-04-09 00:31:44.048845 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:44.048858 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:44.048869 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:44.048879 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:44.048889 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:44.048899 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:44.048909 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:44.048923 | orchestrator | 2026-04-09 00:31:44.048938 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-09 00:31:44.048953 | orchestrator | Thursday 09 April 2026 00:31:37 +0000 (0:00:00.390) 0:05:00.211 ******** 2026-04-09 00:31:44.048967 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:44.048982 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:44.048999 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:44.049014 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:44.049029 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:44.049039 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:44.049050 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:44.049060 | orchestrator | 2026-04-09 00:31:44.049070 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-09 00:31:44.049080 | orchestrator | Thursday 09 April 2026 00:31:37 +0000 (0:00:00.386) 0:05:00.597 ******** 2026-04-09 00:31:44.049090 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:44.049100 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:44.049110 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:44.049121 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:44.049132 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:44.049141 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:31:44.049151 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:44.049161 | orchestrator | 2026-04-09 00:31:44.049171 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-09 00:31:44.049182 | orchestrator | Thursday 09 April 2026 00:31:37 +0000 (0:00:00.249) 0:05:00.847 ******** 2026-04-09 00:31:44.049192 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:44.049203 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:44.049213 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:44.049223 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:44.049232 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:44.049240 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:44.049248 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:44.049257 | orchestrator | 2026-04-09 00:31:44.049265 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-09 00:31:44.049274 | orchestrator | Thursday 09 April 2026 00:31:38 +0000 (0:00:00.294) 0:05:01.141 ******** 2026-04-09 00:31:44.049282 | orchestrator | ok: [testbed-manager] =>  2026-04-09 00:31:44.049291 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049299 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 00:31:44.049308 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049316 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 00:31:44.049325 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049333 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 00:31:44.049342 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049368 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 00:31:44.049377 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049386 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 00:31:44.049394 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049402 | orchestrator | ok: [testbed-node-5] =>  
2026-04-09 00:31:44.049411 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:31:44.049419 | orchestrator | 2026-04-09 00:31:44.049428 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-09 00:31:44.049436 | orchestrator | Thursday 09 April 2026 00:31:38 +0000 (0:00:00.252) 0:05:01.394 ******** 2026-04-09 00:31:44.049445 | orchestrator | ok: [testbed-manager] =>  2026-04-09 00:31:44.049460 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049469 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 00:31:44.049478 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049486 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 00:31:44.049495 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049503 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 00:31:44.049512 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049520 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 00:31:44.049528 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049537 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 00:31:44.049545 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049554 | orchestrator | ok: [testbed-node-5] =>  2026-04-09 00:31:44.049562 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:31:44.049570 | orchestrator | 2026-04-09 00:31:44.049579 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-09 00:31:44.049588 | orchestrator | Thursday 09 April 2026 00:31:38 +0000 (0:00:00.292) 0:05:01.686 ******** 2026-04-09 00:31:44.049596 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:44.049605 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:44.049613 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:44.049621 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:44.049629 | orchestrator | skipping: [testbed-node-3] 
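The debug tasks above report `docker_version: 5:27.5.1` on every host. The `5:` prefix is a Debian package epoch, not part of the upstream Docker release number. A minimal sketch of splitting such a version string; the helper name is illustrative and not part of the osism roles:

```python
# Hypothetical helper (not from the osism collections): split a Debian
# package version string such as the "5:27.5.1" printed by the
# "Print used docker version" task into epoch and upstream version.
def split_debian_version(version: str) -> tuple[str, str]:
    """Return (epoch, upstream); the epoch defaults to "0"."""
    if ":" in version:
        epoch, upstream = version.split(":", 1)
    else:
        epoch, upstream = "0", version
    return epoch, upstream

print(split_debian_version("5:27.5.1"))  # ('5', '27.5.1')
```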
2026-04-09 00:31:44.049638 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:44.049646 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:44.049655 | orchestrator | 2026-04-09 00:31:44.049663 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-09 00:31:44.049672 | orchestrator | Thursday 09 April 2026 00:31:38 +0000 (0:00:00.275) 0:05:01.961 ******** 2026-04-09 00:31:44.049681 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:44.049689 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:44.049698 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:44.049706 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:44.049714 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:44.049723 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:44.049731 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:44.049740 | orchestrator | 2026-04-09 00:31:44.049748 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-09 00:31:44.049757 | orchestrator | Thursday 09 April 2026 00:31:39 +0000 (0:00:00.252) 0:05:02.214 ******** 2026-04-09 00:31:44.049773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:31:44.049784 | orchestrator | 2026-04-09 00:31:44.049821 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-09 00:31:44.049830 | orchestrator | Thursday 09 April 2026 00:31:39 +0000 (0:00:00.394) 0:05:02.608 ******** 2026-04-09 00:31:44.049839 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:44.049847 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:44.049856 | orchestrator | ok: [testbed-node-2] 2026-04-09 
00:31:44.049864 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:44.049873 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:44.049881 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:44.049889 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:44.049898 | orchestrator | 2026-04-09 00:31:44.049906 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-09 00:31:44.049918 | orchestrator | Thursday 09 April 2026 00:31:40 +0000 (0:00:00.801) 0:05:03.409 ******** 2026-04-09 00:31:44.049934 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:31:44.049950 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:44.049965 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:31:44.049981 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:31:44.049997 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:31:44.050077 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:31:44.050089 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:31:44.050098 | orchestrator | 2026-04-09 00:31:44.050107 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-09 00:31:44.050116 | orchestrator | Thursday 09 April 2026 00:31:43 +0000 (0:00:03.271) 0:05:06.681 ******** 2026-04-09 00:31:44.050125 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-09 00:31:44.050134 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-09 00:31:44.050143 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-09 00:31:44.050151 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-09 00:31:44.050160 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:31:44.050168 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-09 00:31:44.050177 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-09 00:31:44.050186 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-09 00:31:44.050194 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-09 00:31:44.050203 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-09 00:31:44.050211 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:44.050220 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-09 00:31:44.050229 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-09 00:31:44.050237 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-09 00:31:44.050246 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:44.050255 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-09 00:31:44.050271 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-09 00:32:46.254839 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-09 00:32:46.254939 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:32:46.254949 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-09 00:32:46.254956 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-09 00:32:46.254962 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:32:46.254968 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-09 00:32:46.254975 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:32:46.254980 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-09 00:32:46.254986 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-09 00:32:46.254992 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-09 00:32:46.254997 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:46.255004 | orchestrator | 2026-04-09 00:32:46.255011 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-09 00:32:46.255019 | orchestrator | Thursday 
09 April 2026 00:31:44 +0000 (0:00:00.554) 0:05:07.236 ******** 2026-04-09 00:32:46.255025 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255031 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255036 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255042 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255048 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255055 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255060 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255066 | orchestrator | 2026-04-09 00:32:46.255072 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-09 00:32:46.255078 | orchestrator | Thursday 09 April 2026 00:31:51 +0000 (0:00:06.835) 0:05:14.072 ******** 2026-04-09 00:32:46.255083 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255089 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255094 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255100 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255106 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255111 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255139 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255145 | orchestrator | 2026-04-09 00:32:46.255151 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-09 00:32:46.255156 | orchestrator | Thursday 09 April 2026 00:31:52 +0000 (0:00:01.032) 0:05:15.104 ******** 2026-04-09 00:32:46.255162 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255167 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255173 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255179 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255184 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255190 | orchestrator | 
changed: [testbed-node-5] 2026-04-09 00:32:46.255196 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255201 | orchestrator | 2026-04-09 00:32:46.255206 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-09 00:32:46.255212 | orchestrator | Thursday 09 April 2026 00:32:00 +0000 (0:00:08.345) 0:05:23.450 ******** 2026-04-09 00:32:46.255218 | orchestrator | changed: [testbed-manager] 2026-04-09 00:32:46.255223 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255228 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255247 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255253 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255258 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255263 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255269 | orchestrator | 2026-04-09 00:32:46.255274 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-09 00:32:46.255279 | orchestrator | Thursday 09 April 2026 00:32:04 +0000 (0:00:03.651) 0:05:27.101 ******** 2026-04-09 00:32:46.255284 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255290 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255295 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255300 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255306 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255311 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255316 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255321 | orchestrator | 2026-04-09 00:32:46.255327 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-09 00:32:46.255333 | orchestrator | Thursday 09 April 2026 00:32:05 +0000 (0:00:01.338) 0:05:28.439 ******** 2026-04-09 00:32:46.255339 | orchestrator | ok: [testbed-manager] 
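The "Pin docker package version" and "Pin docker-cli package version" tasks hold the packages at the version shown earlier in the log. An apt preferences stanza achieving the same pin could look like the sketch below; the package name `docker-ce` and the priority are assumptions, only the version value comes from this log:

```python
# Sketch: render an apt preferences pin stanza; the template and
# target file the osism role actually uses are not visible here.
def render_apt_pin(package: str, version: str, priority: int = 1001) -> str:
    return (
        f"Package: {package}\n"
        f"Pin: version {version}\n"
        f"Pin-Priority: {priority}\n"
    )

print(render_apt_pin("docker-ce", "5:27.5.1"))
```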
2026-04-09 00:32:46.255344 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255350 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255355 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255361 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255366 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255374 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255381 | orchestrator | 2026-04-09 00:32:46.255387 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-09 00:32:46.255393 | orchestrator | Thursday 09 April 2026 00:32:06 +0000 (0:00:01.317) 0:05:29.756 ******** 2026-04-09 00:32:46.255399 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:46.255404 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:46.255411 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:32:46.255417 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:32:46.255423 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:32:46.255428 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:46.255433 | orchestrator | changed: [testbed-manager] 2026-04-09 00:32:46.255439 | orchestrator | 2026-04-09 00:32:46.255445 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-09 00:32:46.255452 | orchestrator | Thursday 09 April 2026 00:32:07 +0000 (0:00:00.558) 0:05:30.315 ******** 2026-04-09 00:32:46.255458 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255465 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255471 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255476 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255490 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255498 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255504 | orchestrator | changed: [testbed-node-2] 2026-04-09 
00:32:46.255509 | orchestrator | 2026-04-09 00:32:46.255515 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-09 00:32:46.255541 | orchestrator | Thursday 09 April 2026 00:32:17 +0000 (0:00:09.961) 0:05:40.277 ******** 2026-04-09 00:32:46.255547 | orchestrator | changed: [testbed-manager] 2026-04-09 00:32:46.255553 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255558 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255564 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255569 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255574 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255579 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255585 | orchestrator | 2026-04-09 00:32:46.255590 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-09 00:32:46.255596 | orchestrator | Thursday 09 April 2026 00:32:18 +0000 (0:00:01.106) 0:05:41.383 ******** 2026-04-09 00:32:46.255601 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255606 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255611 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255617 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255622 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255627 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255633 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255638 | orchestrator | 2026-04-09 00:32:46.255643 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-09 00:32:46.255649 | orchestrator | Thursday 09 April 2026 00:32:28 +0000 (0:00:09.730) 0:05:51.113 ******** 2026-04-09 00:32:46.255654 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255659 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255664 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255670 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255703 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255710 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255716 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255722 | orchestrator | 2026-04-09 00:32:46.255727 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-09 00:32:46.255733 | orchestrator | Thursday 09 April 2026 00:32:39 +0000 (0:00:11.332) 0:06:02.445 ******** 2026-04-09 00:32:46.255738 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-09 00:32:46.255744 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-09 00:32:46.255751 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-09 00:32:46.255756 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-09 00:32:46.255762 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-09 00:32:46.255767 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-09 00:32:46.255773 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-09 00:32:46.255778 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-09 00:32:46.255784 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-09 00:32:46.255789 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-09 00:32:46.255795 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-09 00:32:46.255801 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-09 00:32:46.255807 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-09 00:32:46.255812 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-09 00:32:46.255817 | orchestrator | 2026-04-09 00:32:46.255823 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-09 00:32:46.255828 | orchestrator | Thursday 09 April 2026 00:32:40 +0000 (0:00:01.245) 0:06:03.690 ******** 2026-04-09 00:32:46.255834 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:32:46.255847 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:46.255852 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:46.255857 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:32:46.255863 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:32:46.255869 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:32:46.255874 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:46.255880 | orchestrator | 2026-04-09 00:32:46.255885 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-09 00:32:46.255891 | orchestrator | Thursday 09 April 2026 00:32:41 +0000 (0:00:00.685) 0:06:04.376 ******** 2026-04-09 00:32:46.255896 | orchestrator | ok: [testbed-manager] 2026-04-09 00:32:46.255901 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:32:46.255907 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:32:46.255912 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:32:46.255918 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:32:46.255923 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:32:46.255929 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:32:46.255936 | orchestrator | 2026-04-09 00:32:46.255941 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-09 00:32:46.255948 | orchestrator | Thursday 09 April 2026 00:32:45 +0000 (0:00:04.106) 0:06:08.483 ******** 2026-04-09 00:32:46.255954 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:32:46.255959 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:46.255965 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:46.255972 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:32:46.255977 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:32:46.255982 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:32:46.255987 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:32:46.255993 | orchestrator | 2026-04-09 00:32:46.256037 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-09 00:32:46.256044 | orchestrator | Thursday 09 April 2026 00:32:45 +0000 (0:00:00.493) 0:06:08.976 ******** 2026-04-09 00:32:46.256050 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-09 00:32:46.256056 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-09 00:32:46.256061 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:32:46.256068 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-09 00:32:46.256074 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-09 00:32:46.256079 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:32:46.256085 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-09 00:32:46.256090 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-09 00:32:46.256097 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:32:46.256112 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-09 00:33:05.973319 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-09 00:33:05.973423 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-09 00:33:05.973438 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-09 00:33:05.973450 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:05.973461 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-09 00:33:05.973472 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  
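The "Unblock"/"Block installation of python docker packages" tasks toggle holds on `python3-docker` and `python-docker`, so the bindings can come either from apt or from pip. A sketch of the equivalent `apt-mark` invocations; only the package names come from the log, the orchestration around them is assumed:

```python
# Sketch: build the apt-mark commands that would hold or release the
# python docker packages listed in the tasks above.
PACKAGES = ["python3-docker", "python-docker"]

def apt_mark_commands(action: str, packages: list[str]) -> list[str]:
    if action not in ("hold", "unhold"):
        raise ValueError(f"unsupported action: {action}")
    return [f"apt-mark {action} {pkg}" for pkg in packages]

for cmd in apt_mark_commands("unhold", PACKAGES):
    print(cmd)
```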
2026-04-09 00:33:05.973483 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:05.973494 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:05.973505 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-09 00:33:05.973516 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-09 00:33:05.973526 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:05.973537 | orchestrator | 2026-04-09 00:33:05.973549 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-09 00:33:05.973585 | orchestrator | Thursday 09 April 2026 00:32:46 +0000 (0:00:00.522) 0:06:09.499 ******** 2026-04-09 00:33:05.973596 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:05.973607 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:05.973618 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:05.973629 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:05.973639 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:05.973650 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:05.973725 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:05.973736 | orchestrator | 2026-04-09 00:33:05.973748 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-09 00:33:05.973759 | orchestrator | Thursday 09 April 2026 00:32:47 +0000 (0:00:00.482) 0:06:09.982 ******** 2026-04-09 00:33:05.973769 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:05.973780 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:05.973791 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:05.973801 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:05.973812 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:05.973823 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:05.973833 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 00:33:05.973845 | orchestrator | 2026-04-09 00:33:05.973859 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-04-09 00:33:05.973873 | orchestrator | Thursday 09 April 2026 00:32:47 +0000 (0:00:00.648) 0:06:10.630 ******** 2026-04-09 00:33:05.973885 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:05.973898 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:05.973910 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:05.973922 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:05.973935 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:05.973948 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:05.973960 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:05.973973 | orchestrator | 2026-04-09 00:33:05.973986 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-09 00:33:05.973998 | orchestrator | Thursday 09 April 2026 00:32:48 +0000 (0:00:00.517) 0:06:11.147 ******** 2026-04-09 00:33:05.974085 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.974101 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:05.974114 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:05.974126 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:05.974136 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:05.974147 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:05.974158 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:05.974169 | orchestrator | 2026-04-09 00:33:05.974180 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-09 00:33:05.974191 | orchestrator | Thursday 09 April 2026 00:32:49 +0000 (0:00:01.723) 0:06:12.871 ******** 2026-04-09 00:33:05.974203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:33:05.974216 | orchestrator | 2026-04-09 00:33:05.974227 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-09 00:33:05.974238 | orchestrator | Thursday 09 April 2026 00:32:50 +0000 (0:00:00.822) 0:06:13.693 ******** 2026-04-09 00:33:05.974249 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.974260 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:05.974271 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:05.974281 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:05.974292 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:05.974304 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:05.974315 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:05.974325 | orchestrator | 2026-04-09 00:33:05.974336 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-09 00:33:05.974356 | orchestrator | Thursday 09 April 2026 00:32:51 +0000 (0:00:01.038) 0:06:14.732 ******** 2026-04-09 00:33:05.974367 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.974378 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:05.974388 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:05.974399 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:05.974409 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:05.974420 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:05.974431 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:05.974441 | orchestrator | 2026-04-09 00:33:05.974452 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-09 00:33:05.974463 | orchestrator | Thursday 09 April 2026 00:32:52 +0000 (0:00:00.874) 0:06:15.606 ******** 2026-04-09 00:33:05.974474 | orchestrator | ok: [testbed-manager] 
2026-04-09 00:33:05.974484 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:05.974495 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:05.974506 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:05.974517 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:05.974527 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:05.974538 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:05.974549 | orchestrator | 2026-04-09 00:33:05.974559 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-04-09 00:33:05.974589 | orchestrator | Thursday 09 April 2026 00:32:54 +0000 (0:00:01.501) 0:06:17.108 ******** 2026-04-09 00:33:05.974601 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:05.974612 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:05.974623 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:05.974634 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:05.974644 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:05.974678 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:05.974690 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:05.974700 | orchestrator | 2026-04-09 00:33:05.974711 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-09 00:33:05.974722 | orchestrator | Thursday 09 April 2026 00:32:55 +0000 (0:00:01.567) 0:06:18.675 ******** 2026-04-09 00:33:05.974733 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.974744 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:05.974755 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:05.974766 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:05.974777 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:05.974788 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:05.974799 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:05.974809 | orchestrator | 
2026-04-09 00:33:05.974820 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-09 00:33:05.974832 | orchestrator | Thursday 09 April 2026 00:32:57 +0000 (0:00:01.315) 0:06:19.990 ******** 2026-04-09 00:33:05.974842 | orchestrator | changed: [testbed-manager] 2026-04-09 00:33:05.974853 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:05.974865 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:05.974876 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:05.974886 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:05.974897 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:05.974908 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:05.974919 | orchestrator | 2026-04-09 00:33:05.974930 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-09 00:33:05.974942 | orchestrator | Thursday 09 April 2026 00:32:58 +0000 (0:00:01.662) 0:06:21.652 ******** 2026-04-09 00:33:05.974953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:33:05.974964 | orchestrator | 2026-04-09 00:33:05.974975 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-09 00:33:05.974986 | orchestrator | Thursday 09 April 2026 00:32:59 +0000 (0:00:00.887) 0:06:22.539 ******** 2026-04-09 00:33:05.975011 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.975022 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:05.975033 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:05.975044 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:05.975055 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:05.975066 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:05.975076 | 
orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:05.975087 | orchestrator | 2026-04-09 00:33:05.975098 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-09 00:33:05.975109 | orchestrator | Thursday 09 April 2026 00:33:01 +0000 (0:00:01.520) 0:06:24.060 ******** 2026-04-09 00:33:05.975120 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.975131 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:05.975142 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:05.975152 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:05.975163 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:05.975173 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:05.975184 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:05.975194 | orchestrator | 2026-04-09 00:33:05.975205 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-09 00:33:05.975216 | orchestrator | Thursday 09 April 2026 00:33:02 +0000 (0:00:01.380) 0:06:25.440 ******** 2026-04-09 00:33:05.975227 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.975238 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:05.975249 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:05.975259 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:05.975270 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:05.975281 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:05.975291 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:05.975302 | orchestrator | 2026-04-09 00:33:05.975313 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-09 00:33:05.975324 | orchestrator | Thursday 09 April 2026 00:33:03 +0000 (0:00:01.219) 0:06:26.660 ******** 2026-04-09 00:33:05.975335 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:05.975346 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:05.975357 | orchestrator | ok: 
[testbed-node-1] 2026-04-09 00:33:05.975367 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:05.975378 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:05.975388 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:05.975399 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:05.975410 | orchestrator | 2026-04-09 00:33:05.975420 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-09 00:33:05.975431 | orchestrator | Thursday 09 April 2026 00:33:04 +0000 (0:00:01.136) 0:06:27.797 ******** 2026-04-09 00:33:05.975442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:33:05.975454 | orchestrator | 2026-04-09 00:33:05.975465 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:05.975476 | orchestrator | Thursday 09 April 2026 00:33:05 +0000 (0:00:00.882) 0:06:28.679 ******** 2026-04-09 00:33:05.975486 | orchestrator | 2026-04-09 00:33:05.975497 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:05.975508 | orchestrator | Thursday 09 April 2026 00:33:05 +0000 (0:00:00.041) 0:06:28.721 ******** 2026-04-09 00:33:05.975519 | orchestrator | 2026-04-09 00:33:05.975530 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:05.975541 | orchestrator | Thursday 09 April 2026 00:33:05 +0000 (0:00:00.182) 0:06:28.903 ******** 2026-04-09 00:33:05.975551 | orchestrator | 2026-04-09 00:33:05.975562 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:05.975580 | orchestrator | Thursday 09 April 2026 00:33:05 +0000 (0:00:00.041) 0:06:28.945 ******** 2026-04-09 
00:33:32.068808 | orchestrator | 2026-04-09 00:33:32.068925 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:32.069002 | orchestrator | Thursday 09 April 2026 00:33:06 +0000 (0:00:00.040) 0:06:28.986 ******** 2026-04-09 00:33:32.069015 | orchestrator | 2026-04-09 00:33:32.069027 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:32.069038 | orchestrator | Thursday 09 April 2026 00:33:06 +0000 (0:00:00.046) 0:06:29.032 ******** 2026-04-09 00:33:32.069057 | orchestrator | 2026-04-09 00:33:32.069087 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-09 00:33:32.069108 | orchestrator | Thursday 09 April 2026 00:33:06 +0000 (0:00:00.038) 0:06:29.071 ******** 2026-04-09 00:33:32.069127 | orchestrator | 2026-04-09 00:33:32.069145 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-09 00:33:32.069164 | orchestrator | Thursday 09 April 2026 00:33:06 +0000 (0:00:00.041) 0:06:29.113 ******** 2026-04-09 00:33:32.069182 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:32.069200 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:32.069217 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:32.069237 | orchestrator | 2026-04-09 00:33:32.069255 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-09 00:33:32.069274 | orchestrator | Thursday 09 April 2026 00:33:07 +0000 (0:00:01.173) 0:06:30.286 ******** 2026-04-09 00:33:32.069292 | orchestrator | changed: [testbed-manager] 2026-04-09 00:33:32.069311 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:32.069330 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:32.069348 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:32.069367 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:32.069387 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:32.069408 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:32.069429 | orchestrator | 2026-04-09 00:33:32.069451 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-09 00:33:32.069471 | orchestrator | Thursday 09 April 2026 00:33:08 +0000 (0:00:01.318) 0:06:31.605 ******** 2026-04-09 00:33:32.069491 | orchestrator | changed: [testbed-manager] 2026-04-09 00:33:32.069532 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:32.069566 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:32.069584 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:32.069681 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:32.069701 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:32.069718 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:32.069733 | orchestrator | 2026-04-09 00:33:32.069751 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-09 00:33:32.069769 | orchestrator | Thursday 09 April 2026 00:33:09 +0000 (0:00:01.180) 0:06:32.786 ******** 2026-04-09 00:33:32.069787 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:32.069803 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:32.069821 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:32.069839 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:32.069856 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:32.069872 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:32.069890 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:32.069907 | orchestrator | 2026-04-09 00:33:32.069947 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-09 00:33:32.069969 | orchestrator | Thursday 09 April 2026 00:33:12 +0000 (0:00:02.436) 0:06:35.222 ******** 2026-04-09 00:33:32.069988 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 00:33:32.070007 | orchestrator | 2026-04-09 00:33:32.070105 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-09 00:33:32.070127 | orchestrator | Thursday 09 April 2026 00:33:12 +0000 (0:00:00.101) 0:06:35.323 ******** 2026-04-09 00:33:32.070146 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:32.070167 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:32.070187 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:32.070207 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:32.070228 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:33:32.070272 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:32.070292 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:32.070313 | orchestrator | 2026-04-09 00:33:32.070333 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-09 00:33:32.070355 | orchestrator | Thursday 09 April 2026 00:33:13 +0000 (0:00:01.187) 0:06:36.510 ******** 2026-04-09 00:33:32.070375 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:32.070394 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:32.070413 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:32.070434 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:32.070455 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:32.070474 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:32.070494 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:32.070515 | orchestrator | 2026-04-09 00:33:32.070536 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-09 00:33:32.070558 | orchestrator | Thursday 09 April 2026 00:33:14 +0000 (0:00:00.511) 0:06:37.022 ******** 2026-04-09 00:33:32.070579 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:33:32.070634 | orchestrator | 2026-04-09 00:33:32.070655 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-09 00:33:32.070675 | orchestrator | Thursday 09 April 2026 00:33:14 +0000 (0:00:00.869) 0:06:37.891 ******** 2026-04-09 00:33:32.070693 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:32.070711 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:32.070729 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:32.070746 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:32.070763 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:32.070781 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:32.070800 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:32.070818 | orchestrator | 2026-04-09 00:33:32.070835 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-09 00:33:32.070853 | orchestrator | Thursday 09 April 2026 00:33:15 +0000 (0:00:00.991) 0:06:38.883 ******** 2026-04-09 00:33:32.070870 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-09 00:33:32.070923 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-09 00:33:32.070944 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-09 00:33:32.070998 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-09 00:33:32.071022 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-09 00:33:32.071043 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-09 00:33:32.071061 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-09 00:33:32.071081 | orchestrator | ok: [testbed-manager] => 
(item=docker_images) 2026-04-09 00:33:32.071102 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-09 00:33:32.071121 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-09 00:33:32.071139 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-09 00:33:32.071158 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-09 00:33:32.071178 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-09 00:33:32.071198 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-09 00:33:32.071220 | orchestrator | 2026-04-09 00:33:32.071240 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-04-09 00:33:32.071260 | orchestrator | Thursday 09 April 2026 00:33:18 +0000 (0:00:02.445) 0:06:41.329 ******** 2026-04-09 00:33:32.071281 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:32.071302 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:32.071322 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:32.071344 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:32.071384 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:32.071407 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:32.071428 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:32.071448 | orchestrator | 2026-04-09 00:33:32.071469 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-09 00:33:32.071489 | orchestrator | Thursday 09 April 2026 00:33:18 +0000 (0:00:00.473) 0:06:41.802 ******** 2026-04-09 00:33:32.071512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:33:32.071537 | orchestrator | 
2026-04-09 00:33:32.071558 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-09 00:33:32.071578 | orchestrator | Thursday 09 April 2026 00:33:19 +0000 (0:00:00.895) 0:06:42.698 ******** 2026-04-09 00:33:32.071628 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:32.071648 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:32.071668 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:32.071688 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:32.071707 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:32.071727 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:32.071746 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:32.071767 | orchestrator | 2026-04-09 00:33:32.071799 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-09 00:33:32.071819 | orchestrator | Thursday 09 April 2026 00:33:20 +0000 (0:00:00.887) 0:06:43.586 ******** 2026-04-09 00:33:32.071840 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:32.071860 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:32.071879 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:32.071898 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:32.071918 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:32.071937 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:32.071958 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:32.071976 | orchestrator | 2026-04-09 00:33:32.071996 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-09 00:33:32.072015 | orchestrator | Thursday 09 April 2026 00:33:21 +0000 (0:00:00.803) 0:06:44.389 ******** 2026-04-09 00:33:32.072033 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:32.072050 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:32.072067 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:32.072086 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:33:32.072105 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:32.072123 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:32.072175 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:32.072195 | orchestrator | 2026-04-09 00:33:32.072213 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-09 00:33:32.072232 | orchestrator | Thursday 09 April 2026 00:33:21 +0000 (0:00:00.507) 0:06:44.896 ******** 2026-04-09 00:33:32.072253 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:32.072271 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:33:32.072289 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:33:32.072308 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:33:32.072328 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:33:32.072347 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:33:32.072368 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:33:32.072388 | orchestrator | 2026-04-09 00:33:32.072408 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-09 00:33:32.072430 | orchestrator | Thursday 09 April 2026 00:33:23 +0000 (0:00:01.501) 0:06:46.398 ******** 2026-04-09 00:33:32.072451 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:33:32.072472 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:33:32.072492 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:33:32.072513 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:33:32.072534 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:33:32.072572 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:33:32.072617 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:33:32.072636 | orchestrator | 2026-04-09 00:33:32.072653 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-09 00:33:32.072670 | orchestrator | Thursday 09 April 2026 
00:33:24 +0000 (0:00:00.647) 0:06:47.046 ******** 2026-04-09 00:33:32.072688 | orchestrator | ok: [testbed-manager] 2026-04-09 00:33:32.072705 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:33:32.072723 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:33:32.072741 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:33:32.072760 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:33:32.072779 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:33:32.072817 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:04.193009 | orchestrator | 2026-04-09 00:34:04.193090 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-04-09 00:34:04.193098 | orchestrator | Thursday 09 April 2026 00:33:32 +0000 (0:00:08.070) 0:06:55.116 ******** 2026-04-09 00:34:04.193104 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193109 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:04.193114 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:04.193119 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:04.193124 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:04.193128 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:04.193133 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:04.193137 | orchestrator | 2026-04-09 00:34:04.193141 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-09 00:34:04.193146 | orchestrator | Thursday 09 April 2026 00:33:33 +0000 (0:00:01.331) 0:06:56.447 ******** 2026-04-09 00:34:04.193150 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193154 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:04.193158 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:04.193162 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:04.193166 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:04.193170 | orchestrator | changed: 
[testbed-node-4] 2026-04-09 00:34:04.193175 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:04.193179 | orchestrator | 2026-04-09 00:34:04.193183 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-09 00:34:04.193198 | orchestrator | Thursday 09 April 2026 00:33:35 +0000 (0:00:01.805) 0:06:58.253 ******** 2026-04-09 00:34:04.193208 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193212 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:04.193216 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:04.193221 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:04.193225 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:04.193229 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:04.193233 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:04.193237 | orchestrator | 2026-04-09 00:34:04.193241 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 00:34:04.193246 | orchestrator | Thursday 09 April 2026 00:33:37 +0000 (0:00:01.739) 0:06:59.993 ******** 2026-04-09 00:34:04.193250 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193254 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:04.193258 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:04.193262 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:04.193267 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:04.193271 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:04.193275 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:04.193279 | orchestrator | 2026-04-09 00:34:04.193283 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-09 00:34:04.193287 | orchestrator | Thursday 09 April 2026 00:33:37 +0000 (0:00:00.805) 0:07:00.798 ******** 2026-04-09 00:34:04.193291 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:04.193296 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:04.193300 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:04.193324 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:04.193329 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:04.193333 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:04.193337 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:04.193342 | orchestrator | 2026-04-09 00:34:04.193346 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-09 00:34:04.193351 | orchestrator | Thursday 09 April 2026 00:33:38 +0000 (0:00:00.758) 0:07:01.557 ******** 2026-04-09 00:34:04.193355 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:04.193359 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:04.193363 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:04.193368 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:04.193372 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:04.193376 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:04.193380 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:04.193384 | orchestrator | 2026-04-09 00:34:04.193388 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-09 00:34:04.193393 | orchestrator | Thursday 09 April 2026 00:33:39 +0000 (0:00:00.627) 0:07:02.184 ******** 2026-04-09 00:34:04.193397 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193401 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:04.193405 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:04.193409 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:04.193413 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:04.193417 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:04.193421 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:04.193425 | orchestrator | 2026-04-09 00:34:04.193430 | 
orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-04-09 00:34:04.193434 | orchestrator | Thursday 09 April 2026 00:33:39 +0000 (0:00:00.504) 0:07:02.689 ******** 2026-04-09 00:34:04.193438 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193442 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:04.193446 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:04.193450 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:04.193454 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:04.193458 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:04.193462 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:04.193466 | orchestrator | 2026-04-09 00:34:04.193470 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-04-09 00:34:04.193474 | orchestrator | Thursday 09 April 2026 00:33:40 +0000 (0:00:00.515) 0:07:03.205 ******** 2026-04-09 00:34:04.193479 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193483 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:04.193487 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:04.193491 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:04.193495 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:04.193499 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:04.193503 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:04.193507 | orchestrator | 2026-04-09 00:34:04.193511 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-04-09 00:34:04.193515 | orchestrator | Thursday 09 April 2026 00:33:40 +0000 (0:00:00.507) 0:07:03.713 ******** 2026-04-09 00:34:04.193519 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193524 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:04.193528 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:04.193532 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:04.193536 | 
orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:04.193540 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:04.193591 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:04.193599 | orchestrator | 2026-04-09 00:34:04.193618 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-04-09 00:34:04.193623 | orchestrator | Thursday 09 April 2026 00:33:46 +0000 (0:00:05.524) 0:07:09.237 ******** 2026-04-09 00:34:04.193628 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:04.193633 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:04.193643 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:04.193648 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:04.193653 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:04.193658 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:04.193664 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:04.193671 | orchestrator | 2026-04-09 00:34:04.193678 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-09 00:34:04.193684 | orchestrator | Thursday 09 April 2026 00:33:46 +0000 (0:00:00.683) 0:07:09.921 ******** 2026-04-09 00:34:04.193693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:04.193702 | orchestrator | 2026-04-09 00:34:04.193708 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-09 00:34:04.193715 | orchestrator | Thursday 09 April 2026 00:33:47 +0000 (0:00:00.776) 0:07:10.697 ******** 2026-04-09 00:34:04.193721 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:04.193728 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:04.193736 | orchestrator | ok: [testbed-node-1] 
2026-04-09 00:34:04.193742 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:04.193749 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:04.193757 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:04.193761 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:04.193766 | orchestrator |
2026-04-09 00:34:04.193771 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-09 00:34:04.193776 | orchestrator | Thursday 09 April 2026 00:33:49 +0000 (0:00:01.971) 0:07:12.668 ********
2026-04-09 00:34:04.193781 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:04.193786 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:04.193790 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:04.193795 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:04.193800 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:04.193805 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:04.193809 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:04.193814 | orchestrator |
2026-04-09 00:34:04.193819 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-09 00:34:04.193824 | orchestrator | Thursday 09 April 2026 00:33:50 +0000 (0:00:01.263) 0:07:13.931 ********
2026-04-09 00:34:04.193829 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:04.193834 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:04.193838 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:04.193843 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:04.193848 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:04.193853 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:04.193857 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:04.193862 | orchestrator |
2026-04-09 00:34:04.193867 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-09 00:34:04.193876 | orchestrator | Thursday 09 April 2026 00:33:51 +0000 (0:00:00.829) 0:07:14.761 ********
2026-04-09 00:34:04.193881 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193888 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193893 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193898 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193904 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193909 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193918 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:34:04.193923 | orchestrator |
2026-04-09 00:34:04.193928 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-09 00:34:04.193933 | orchestrator | Thursday 09 April 2026 00:33:53 +0000 (0:00:01.739) 0:07:16.500 ********
2026-04-09 00:34:04.193939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:04.193943 | orchestrator |
2026-04-09 00:34:04.193947 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-09 00:34:04.193951 | orchestrator | Thursday 09 April 2026 00:33:54 +0000 (0:00:00.950) 0:07:17.451 ********
2026-04-09 00:34:04.193955 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:04.193960 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:04.193964 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:04.193968 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:04.193972 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:04.193976 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:04.193980 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:04.193984 | orchestrator |
2026-04-09 00:34:04.193992 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-09 00:34:33.935459 | orchestrator | Thursday 09 April 2026 00:34:04 +0000 (0:00:09.715) 0:07:27.167 ********
2026-04-09 00:34:33.935700 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:33.935724 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:33.935736 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:33.935747 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:33.935757 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:33.935768 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:33.935779 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:33.935790 | orchestrator |
2026-04-09 00:34:33.935802 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-09 00:34:33.935813 | orchestrator | Thursday 09 April 2026 00:34:05 +0000 (0:00:01.715) 0:07:28.882 ********
2026-04-09 00:34:33.935824 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:33.935835 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:33.935846 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:33.935857 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:33.935867 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:33.935878 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:33.935889 | orchestrator |
2026-04-09 00:34:33.935900 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-09 00:34:33.935911 | orchestrator | Thursday 09 April 2026 00:34:07 +0000 (0:00:01.741) 0:07:30.623 ********
2026-04-09 00:34:33.935925 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.935938 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.935951 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.935964 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.935977 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.935989 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.936002 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.936014 | orchestrator |
2026-04-09 00:34:33.936027 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-09 00:34:33.936041 | orchestrator |
2026-04-09 00:34:33.936055 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-09 00:34:33.936068 | orchestrator | Thursday 09 April 2026 00:34:08 +0000 (0:00:01.224) 0:07:31.848 ********
2026-04-09 00:34:33.936080 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:34:33.936092 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:34:33.936137 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:34:33.936151 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:34:33.936163 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:34:33.936175 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:34:33.936187 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:34:33.936199 | orchestrator |
2026-04-09 00:34:33.936212 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-09 00:34:33.936225 | orchestrator |
2026-04-09 00:34:33.936237 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-09 00:34:33.936250 | orchestrator | Thursday 09 April 2026 00:34:09 +0000 (0:00:00.504) 0:07:32.352 ********
2026-04-09 00:34:33.936261 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.936272 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.936283 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.936293 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.936305 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.936316 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.936342 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.936353 | orchestrator |
2026-04-09 00:34:33.936364 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-09 00:34:33.936375 | orchestrator | Thursday 09 April 2026 00:34:10 +0000 (0:00:01.307) 0:07:33.660 ********
2026-04-09 00:34:33.936386 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:33.936396 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:33.936407 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:33.936417 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:33.936428 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:33.936439 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:33.936449 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:33.936460 | orchestrator |
2026-04-09 00:34:33.936479 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-09 00:34:33.936497 | orchestrator | Thursday 09 April 2026 00:34:12 +0000 (0:00:01.587) 0:07:35.247 ********
2026-04-09 00:34:33.936542 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:34:33.936553 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:34:33.936564 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:34:33.936575 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:34:33.936586 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:34:33.936605 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:34:33.936625 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:34:33.936637 | orchestrator |
2026-04-09 00:34:33.936648 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-09 00:34:33.936659 | orchestrator | Thursday 09 April 2026 00:34:12 +0000 (0:00:00.460) 0:07:35.708 ********
2026-04-09 00:34:33.936670 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:33.936685 | orchestrator |
2026-04-09 00:34:33.936704 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-09 00:34:33.936720 | orchestrator | Thursday 09 April 2026 00:34:13 +0000 (0:00:00.806) 0:07:36.515 ********
2026-04-09 00:34:33.936733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:33.936746 | orchestrator |
2026-04-09 00:34:33.936757 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-09 00:34:33.936768 | orchestrator | Thursday 09 April 2026 00:34:14 +0000 (0:00:00.920) 0:07:37.435 ********
2026-04-09 00:34:33.936779 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.936790 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.936800 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.936811 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.936821 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.936842 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.936853 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.936863 | orchestrator |
2026-04-09 00:34:33.936896 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-09 00:34:33.936908 | orchestrator | Thursday 09 April 2026 00:34:23 +0000 (0:00:09.071) 0:07:46.507 ********
2026-04-09 00:34:33.936919 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.936929 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.936940 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.936951 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.936962 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.936972 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.936983 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.936994 | orchestrator |
2026-04-09 00:34:33.937005 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-09 00:34:33.937016 | orchestrator | Thursday 09 April 2026 00:34:24 +0000 (0:00:00.764) 0:07:47.272 ********
2026-04-09 00:34:33.937027 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.937038 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.937048 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.937059 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.937070 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.937081 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.937091 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.937102 | orchestrator |
2026-04-09 00:34:33.937118 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-09 00:34:33.937137 | orchestrator | Thursday 09 April 2026 00:34:25 +0000 (0:00:01.229) 0:07:48.501 ********
2026-04-09 00:34:33.937155 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.937169 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.937180 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.937191 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.937201 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.937212 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.937223 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.937233 | orchestrator |
2026-04-09 00:34:33.937244 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-09 00:34:33.937255 | orchestrator | Thursday 09 April 2026 00:34:27 +0000 (0:00:02.152) 0:07:50.654 ********
2026-04-09 00:34:33.937266 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.937276 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.937287 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.937298 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.937308 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.937319 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.937330 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.937343 | orchestrator |
2026-04-09 00:34:33.937361 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-09 00:34:33.937381 | orchestrator | Thursday 09 April 2026 00:34:28 +0000 (0:00:01.159) 0:07:51.814 ********
2026-04-09 00:34:33.937394 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.937405 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.937416 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.937427 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.937437 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.937448 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.937466 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.937477 | orchestrator |
2026-04-09 00:34:33.937488 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-09 00:34:33.937523 | orchestrator |
2026-04-09 00:34:33.937542 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-09 00:34:33.937560 | orchestrator | Thursday 09 April 2026 00:34:29 +0000 (0:00:01.047) 0:07:52.861 ********
2026-04-09 00:34:33.937592 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:33.937611 | orchestrator |
2026-04-09 00:34:33.937631 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-09 00:34:33.937644 | orchestrator | Thursday 09 April 2026 00:34:30 +0000 (0:00:00.752) 0:07:53.614 ********
2026-04-09 00:34:33.937655 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:33.937666 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:33.937676 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:33.937687 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:33.937698 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:33.937709 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:33.937719 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:33.937730 | orchestrator |
2026-04-09 00:34:33.937741 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-09 00:34:33.937752 | orchestrator | Thursday 09 April 2026 00:34:31 +0000 (0:00:00.669) 0:07:54.283 ********
2026-04-09 00:34:33.937763 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:33.937774 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:33.937785 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:33.937798 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:33.937817 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:33.937836 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:33.937851 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:33.937862 | orchestrator |
2026-04-09 00:34:33.937873 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-09 00:34:33.937884 | orchestrator | Thursday 09 April 2026 00:34:32 +0000 (0:00:01.101) 0:07:55.385 ********
2026-04-09 00:34:33.937895 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:34:33.937906 | orchestrator |
2026-04-09 00:34:33.937917 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-09 00:34:33.937928 | orchestrator | Thursday 09 April 2026 00:34:33 +0000 (0:00:00.748) 0:07:56.133 ********
2026-04-09 00:34:33.937938 | orchestrator | ok: [testbed-manager]
2026-04-09 00:34:33.937949 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:34:33.937960 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:34:33.937971 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:34:33.937981 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:34:33.937992 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:34:33.938002 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:34:33.938080 | orchestrator |
2026-04-09 00:34:33.938107 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-09 00:34:35.349184 | orchestrator | Thursday 09 April 2026 00:34:33 +0000 (0:00:00.776) 0:07:56.910 ********
2026-04-09 00:34:35.349291 | orchestrator | changed: [testbed-manager]
2026-04-09 00:34:35.349302 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:34:35.349309 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:34:35.349316 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:34:35.349322 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:34:35.349329 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:34:35.350237 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:34:35.350272 | orchestrator |
2026-04-09 00:34:35.350286 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:34:35.350299 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-09 00:34:35.350312 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:34:35.350324 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-09 00:34:35.350362 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-09 00:34:35.350374 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:34:35.350384 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:34:35.350394 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:34:35.350405 | orchestrator |
2026-04-09 00:34:35.350415 | orchestrator |
2026-04-09 00:34:35.350425 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:34:35.350437 | orchestrator | Thursday 09 April 2026 00:34:35 +0000 (0:00:01.230) 0:07:58.140 ********
2026-04-09 00:34:35.350448 | orchestrator | ===============================================================================
2026-04-09 00:34:35.350459 | orchestrator | osism.commons.packages : Install required packages --------------------- 70.28s
2026-04-09 00:34:35.350471 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.43s
2026-04-09 00:34:35.350482 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 31.45s
2026-04-09 00:34:35.350528 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.62s
2026-04-09 00:34:35.350543 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.33s
2026-04-09 00:34:35.350554 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.25s
2026-04-09 00:34:35.350566 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.22s
2026-04-09 00:34:35.350576 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.96s
2026-04-09 00:34:35.350586 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.73s
2026-04-09 00:34:35.350595 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.72s
2026-04-09 00:34:35.350605 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.07s
2026-04-09 00:34:35.350615 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.77s
2026-04-09 00:34:35.350627 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.71s
2026-04-09 00:34:35.350639 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.35s
2026-04-09 00:34:35.350650 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.07s
2026-04-09 00:34:35.350659 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.84s
2026-04-09 00:34:35.350669 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.95s
2026-04-09 00:34:35.350680 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.84s
2026-04-09 00:34:35.350691 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.29s
2026-04-09 00:34:35.350700 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.69s
2026-04-09 00:34:35.476947 | orchestrator | + osism apply fail2ban
2026-04-09 00:34:47.085472 | orchestrator | 2026-04-09 00:34:47 | INFO  | Prepare task for execution of fail2ban.
2026-04-09 00:34:47.168670 | orchestrator | 2026-04-09 00:34:47 | INFO  | Task 0afaf9b4-d3c0-4880-a97b-8048d16f9651 (fail2ban) was prepared for execution.
2026-04-09 00:34:47.168758 | orchestrator | 2026-04-09 00:34:47 | INFO  | It takes a moment until task 0afaf9b4-d3c0-4880-a97b-8048d16f9651 (fail2ban) has been started and output is visible here.
2026-04-09 00:35:07.814795 | orchestrator |
2026-04-09 00:35:07.814901 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-09 00:35:07.814943 | orchestrator |
2026-04-09 00:35:07.814955 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-09 00:35:07.814965 | orchestrator | Thursday 09 April 2026 00:34:50 +0000 (0:00:00.320) 0:00:00.320 ********
2026-04-09 00:35:07.814977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:35:07.814989 | orchestrator |
2026-04-09 00:35:07.814999 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-09 00:35:07.815009 | orchestrator | Thursday 09 April 2026 00:34:51 +0000 (0:00:01.124) 0:00:01.445 ********
2026-04-09 00:35:07.815019 | orchestrator | changed: [testbed-manager]
2026-04-09 00:35:07.815030 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:35:07.815039 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:35:07.815048 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:35:07.815058 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:35:07.815067 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:35:07.815077 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:35:07.815086 | orchestrator |
2026-04-09 00:35:07.815095 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-09 00:35:07.815105 | orchestrator | Thursday 09 April 2026 00:35:02 +0000 (0:00:11.243) 0:00:12.689 ********
2026-04-09 00:35:07.815114 | orchestrator | changed: [testbed-manager]
2026-04-09 00:35:07.815124 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:35:07.815133 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:35:07.815155 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:35:07.815166 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:35:07.815175 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:35:07.815184 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:35:07.815194 | orchestrator |
2026-04-09 00:35:07.815203 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-09 00:35:07.815213 | orchestrator | Thursday 09 April 2026 00:35:04 +0000 (0:00:01.692) 0:00:14.381 ********
2026-04-09 00:35:07.815223 | orchestrator | ok: [testbed-manager]
2026-04-09 00:35:07.815233 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:35:07.815242 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:35:07.815252 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:35:07.815261 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:35:07.815271 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:35:07.815280 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:35:07.815289 | orchestrator |
2026-04-09 00:35:07.815299 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-09 00:35:07.815309 | orchestrator | Thursday 09 April 2026 00:35:05 +0000 (0:00:01.269) 0:00:15.651 ********
2026-04-09 00:35:07.815318 | orchestrator | changed: [testbed-manager]
2026-04-09 00:35:07.815328 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:35:07.815339 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:35:07.815352 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:35:07.815363 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:35:07.815374 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:35:07.815385 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:35:07.815398 | orchestrator |
2026-04-09 00:35:07.815409 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:35:07.815434 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815447 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815483 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815494 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815517 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815528 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815540 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:35:07.815551 | orchestrator |
2026-04-09 00:35:07.815563 | orchestrator |
2026-04-09 00:35:07.815574 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:35:07.815586 | orchestrator | Thursday 09 April 2026 00:35:07 +0000 (0:00:01.638) 0:00:17.290 ********
2026-04-09 00:35:07.815598 | orchestrator | ===============================================================================
2026-04-09 00:35:07.815609 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.24s
2026-04-09 00:35:07.815620 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.69s
2026-04-09 00:35:07.815631 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.64s
2026-04-09 00:35:07.815643 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.27s
2026-04-09 00:35:07.815654 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.12s
2026-04-09 00:35:08.001324 | orchestrator | + osism apply network
2026-04-09 00:35:19.332973 | orchestrator | 2026-04-09 00:35:19 | INFO  | Prepare task for execution of network.
2026-04-09 00:35:19.411930 | orchestrator | 2026-04-09 00:35:19 | INFO  | Task 3e529fb4-6399-4140-a4d5-3c4372a45cbc (network) was prepared for execution.
2026-04-09 00:35:19.412040 | orchestrator | 2026-04-09 00:35:19 | INFO  | It takes a moment until task 3e529fb4-6399-4140-a4d5-3c4372a45cbc (network) has been started and output is visible here.
2026-04-09 00:35:48.208199 | orchestrator |
2026-04-09 00:35:48.208314 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-09 00:35:48.208330 | orchestrator |
2026-04-09 00:35:48.208342 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-09 00:35:48.208355 | orchestrator | Thursday 09 April 2026 00:35:22 +0000 (0:00:00.315) 0:00:00.315 ********
2026-04-09 00:35:48.208367 | orchestrator | ok: [testbed-manager]
2026-04-09 00:35:48.208379 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:35:48.208390 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:35:48.208401 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:35:48.208476 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:35:48.208489 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:35:48.208500 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:35:48.208511 | orchestrator |
2026-04-09 00:35:48.208522 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-09 00:35:48.208533 | orchestrator | Thursday 09 April 2026 00:35:23 +0000 (0:00:00.594) 0:00:00.910 ********
2026-04-09 00:35:48.208546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:35:48.208560 | orchestrator |
2026-04-09 00:35:48.208572 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-09 00:35:48.208583 | orchestrator | Thursday 09 April 2026 00:35:24 +0000 (0:00:01.157) 0:00:02.067 ********
2026-04-09 00:35:48.208594 | orchestrator | ok: [testbed-manager]
2026-04-09 00:35:48.208604 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:35:48.208615 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:35:48.208626 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:35:48.208636 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:35:48.208647 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:35:48.208687 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:35:48.208698 | orchestrator |
2026-04-09 00:35:48.208709 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-09 00:35:48.208720 | orchestrator | Thursday 09 April 2026 00:35:26 +0000 (0:00:02.550) 0:00:04.618 ********
2026-04-09 00:35:48.208733 | orchestrator | ok: [testbed-manager]
2026-04-09 00:35:48.208745 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:35:48.208759 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:35:48.208772 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:35:48.208784 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:35:48.208796 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:35:48.208809 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:35:48.208821 | orchestrator |
2026-04-09 00:35:48.208833 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-09 00:35:48.208847 | orchestrator | Thursday 09 April 2026 00:35:28 +0000 (0:00:01.600) 0:00:06.218 ********
2026-04-09 00:35:48.208860 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-09 00:35:48.208873 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-09 00:35:48.208886 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-09 00:35:48.208899 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-09 00:35:48.208912 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-09 00:35:48.208926 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-09 00:35:48.208939 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-09 00:35:48.208952 | orchestrator |
2026-04-09 00:35:48.208963 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-04-09 00:35:48.208974 | orchestrator | Thursday 09 April 2026 00:35:29 +0000 (0:00:01.219) 0:00:07.437 ********
2026-04-09 00:35:48.208985 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:35:48.208997 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:35:48.209007 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:35:48.209018 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:35:48.209029 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:35:48.209039 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:35:48.209050 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:35:48.209061 | orchestrator |
2026-04-09 00:35:48.209072 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-04-09 00:35:48.209084 | orchestrator | Thursday 09 April 2026 00:35:30 +0000 (0:00:00.623) 0:00:08.061 ********
2026-04-09 00:35:48.209095 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:35:48.209105 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:35:48.209116 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:35:48.209127 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:35:48.209138 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:35:48.209148 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:35:48.209159 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:35:48.209170 | orchestrator |
2026-04-09 00:35:48.209199 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-04-09 00:35:48.209211 | orchestrator | Thursday 09 April 2026 00:35:31 +0000 (0:00:00.818) 0:00:08.879 ********
2026-04-09 00:35:48.209222 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:35:48.209232 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:35:48.209243 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:35:48.209253 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:35:48.209264 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:35:48.209275 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:35:48.209285 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:35:48.209296 | orchestrator | 2026-04-09 00:35:48.209307 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-09 00:35:48.209317 | orchestrator | Thursday 09 April 2026 00:35:31 +0000 (0:00:00.756) 0:00:09.636 ******** 2026-04-09 00:35:48.209328 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 00:35:48.209347 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:35:48.209358 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 00:35:48.209368 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 00:35:48.209379 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 00:35:48.209389 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:35:48.209400 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 00:35:48.209427 | orchestrator | 2026-04-09 00:35:48.209457 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-09 00:35:48.209469 | orchestrator | Thursday 09 April 2026 00:35:35 +0000 (0:00:03.462) 0:00:13.099 ******** 2026-04-09 00:35:48.209480 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:48.209491 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:48.209501 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:48.209512 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:48.209523 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:48.209533 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:48.209544 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:48.209554 | orchestrator | 2026-04-09 00:35:48.209565 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-09 00:35:48.209576 | orchestrator | Thursday 09 April 2026 00:35:36 +0000 (0:00:01.521) 0:00:14.620 ******** 2026-04-09 00:35:48.209587 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:35:48.209597 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 00:35:48.209608 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:35:48.209619 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 00:35:48.209629 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 00:35:48.209640 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 00:35:48.209650 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 00:35:48.209661 | orchestrator | 2026-04-09 00:35:48.209671 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-09 00:35:48.209682 | orchestrator | Thursday 09 April 2026 00:35:38 +0000 (0:00:01.801) 0:00:16.422 ******** 2026-04-09 00:35:48.209693 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.209703 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.209714 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.209724 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:48.209735 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.209745 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:48.209756 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:35:48.209766 | orchestrator | 2026-04-09 00:35:48.209777 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-09 00:35:48.209788 | orchestrator | Thursday 09 April 2026 00:35:39 +0000 (0:00:01.092) 0:00:17.514 ******** 2026-04-09 00:35:48.209799 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:35:48.209810 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:35:48.209821 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
00:35:48.209831 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:35:48.209842 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:35:48.209853 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:35:48.209863 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:35:48.209874 | orchestrator | 2026-04-09 00:35:48.209885 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-09 00:35:48.209896 | orchestrator | Thursday 09 April 2026 00:35:40 +0000 (0:00:00.660) 0:00:18.174 ******** 2026-04-09 00:35:48.209906 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.209917 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.209928 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.209938 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:48.209949 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.209959 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:48.209970 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:35:48.209980 | orchestrator | 2026-04-09 00:35:48.209997 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-09 00:35:48.210079 | orchestrator | Thursday 09 April 2026 00:35:42 +0000 (0:00:02.285) 0:00:20.460 ******** 2026-04-09 00:35:48.210095 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:35:48.210107 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:35:48.210118 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:35:48.210129 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:35:48.210140 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:35:48.210151 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:35:48.210162 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-09 00:35:48.210174 | orchestrator | 2026-04-09 00:35:48.210185 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-09 00:35:48.210196 | orchestrator | Thursday 09 April 2026 00:35:43 +0000 (0:00:00.939) 0:00:21.400 ******** 2026-04-09 00:35:48.210207 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.210218 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:48.210229 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:48.210240 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:48.210251 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:48.210261 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:48.210272 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:48.210283 | orchestrator | 2026-04-09 00:35:48.210294 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-09 00:35:48.210305 | orchestrator | Thursday 09 April 2026 00:35:45 +0000 (0:00:01.619) 0:00:23.020 ******** 2026-04-09 00:35:48.210317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:35:48.210329 | orchestrator | 2026-04-09 00:35:48.210340 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-09 00:35:48.210351 | orchestrator | Thursday 09 April 2026 00:35:46 +0000 (0:00:01.217) 0:00:24.237 ******** 2026-04-09 00:35:48.210362 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.210372 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.210383 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.210394 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:48.210405 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.210433 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:48.210444 | orchestrator | ok: [testbed-node-5] 2026-04-09 
00:35:48.210455 | orchestrator | 2026-04-09 00:35:48.210466 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-09 00:35:48.210477 | orchestrator | Thursday 09 April 2026 00:35:47 +0000 (0:00:01.134) 0:00:25.372 ******** 2026-04-09 00:35:48.210488 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.210499 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.210509 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.210520 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:48.210531 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.210550 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:36:03.989378 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:36:03.989522 | orchestrator | 2026-04-09 00:36:03.989538 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-09 00:36:03.989551 | orchestrator | Thursday 09 April 2026 00:35:48 +0000 (0:00:00.623) 0:00:25.995 ******** 2026-04-09 00:36:03.989561 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989571 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989581 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989591 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989601 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989641 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989651 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989661 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989671 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989680 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989690 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989700 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:36:03.989709 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989722 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:36:03.989738 | orchestrator | 2026-04-09 00:36:03.989754 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-09 00:36:03.989770 | orchestrator | Thursday 09 April 2026 00:35:49 +0000 (0:00:01.195) 0:00:27.191 ******** 2026-04-09 00:36:03.989793 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:36:03.989810 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:03.989826 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:03.989841 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:03.989856 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:03.989872 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:03.989887 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:03.989903 | orchestrator | 2026-04-09 00:36:03.989920 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-09 00:36:03.989937 | orchestrator | Thursday 09 April 2026 00:35:50 +0000 (0:00:00.607) 0:00:27.798 ******** 2026-04-09 00:36:03.989966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:36:03.989981 | orchestrator | 2026-04-09 
00:36:03.989993 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-09 00:36:03.990004 | orchestrator | Thursday 09 April 2026 00:35:54 +0000 (0:00:04.078) 0:00:31.877 ******** 2026-04-09 00:36:03.990068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990083 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-09 00:36:03.990097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990122 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-09 00:36:03.990134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 
'addresses': []}}) 2026-04-09 00:36:03.990212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-09 00:36:03.990228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-09 00:36:03.990272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-09 00:36:03.990283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-09 00:36:03.990294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-09 00:36:03.990305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-09 00:36:03.990316 | orchestrator | 2026-04-09 00:36:03.990333 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-09 00:36:03.990344 | orchestrator | Thursday 09 April 2026 00:35:59 +0000 (0:00:05.208) 0:00:37.085 ******** 2026-04-09 00:36:03.990355 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-09 00:36:03.990367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990420 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990438 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-09 00:36:03.990450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:03.990468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-09 00:36:15.512465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:36:15.512572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-09 00:36:15.512597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-09 00:36:15.512617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-09 00:36:15.512635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-09 00:36:15.512653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-09 00:36:15.512672 | orchestrator | 2026-04-09 00:36:15.512692 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-09 00:36:15.512766 | orchestrator | Thursday 09 April 2026 00:36:04 +0000 (0:00:05.503) 0:00:42.589 ******** 2026-04-09 00:36:15.512805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:36:15.512825 | orchestrator | 2026-04-09 00:36:15.512842 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-09 00:36:15.512860 | orchestrator | Thursday 09 April 2026 00:36:06 +0000 (0:00:01.268) 0:00:43.857 ******** 2026-04-09 00:36:15.512877 | orchestrator | ok: [testbed-manager] 2026-04-09 00:36:15.512894 | orchestrator | ok: [testbed-node-0] 2026-04-09 
00:36:15.512911 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:36:15.512927 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:36:15.512943 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:36:15.512959 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:36:15.512979 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:36:15.512997 | orchestrator | 2026-04-09 00:36:15.513043 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-09 00:36:15.513062 | orchestrator | Thursday 09 April 2026 00:36:07 +0000 (0:00:00.940) 0:00:44.798 ******** 2026-04-09 00:36:15.513079 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513095 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513111 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513127 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513144 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513162 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513178 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513196 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513213 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:36:15.513231 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513249 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513266 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513284 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513301 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:15.513317 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513333 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513348 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513365 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513424 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:15.513444 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513462 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513480 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513498 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513516 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:15.513527 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513537 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513546 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513556 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513565 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:15.513575 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:15.513584 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:36:15.513594 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:36:15.513603 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:36:15.513613 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:36:15.513622 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:15.513632 | orchestrator | 2026-04-09 00:36:15.513641 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-09 00:36:15.513661 | orchestrator | Thursday 09 April 2026 00:36:08 +0000 (0:00:00.926) 0:00:45.725 ******** 2026-04-09 00:36:15.513671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:36:15.513681 | orchestrator | 2026-04-09 00:36:15.513691 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-09 00:36:15.513700 | orchestrator | Thursday 09 April 2026 00:36:09 +0000 (0:00:01.217) 0:00:46.942 ******** 2026-04-09 00:36:15.513710 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:36:15.513720 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:15.513774 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:15.513787 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:15.513797 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:15.513808 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:15.513818 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:15.513828 | orchestrator | 2026-04-09 00:36:15.513838 | orchestrator | TASK [osism.commons.network : Deploy 
network-extra-init systemd service] ******* 2026-04-09 00:36:15.513848 | orchestrator | Thursday 09 April 2026 00:36:09 +0000 (0:00:00.535) 0:00:47.477 ******** 2026-04-09 00:36:15.513858 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:36:15.513868 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:15.513878 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:15.513888 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:15.513897 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:15.513907 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:15.513917 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:15.513927 | orchestrator | 2026-04-09 00:36:15.513937 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-09 00:36:15.513947 | orchestrator | Thursday 09 April 2026 00:36:10 +0000 (0:00:00.664) 0:00:48.142 ******** 2026-04-09 00:36:15.513955 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:36:15.513963 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:15.513971 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:15.513979 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:15.513987 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:15.513996 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:15.514004 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:15.514012 | orchestrator | 2026-04-09 00:36:15.514065 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-09 00:36:15.514073 | orchestrator | Thursday 09 April 2026 00:36:11 +0000 (0:00:00.541) 0:00:48.683 ******** 2026-04-09 00:36:15.514082 | orchestrator | ok: [testbed-manager] 2026-04-09 00:36:15.514090 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:36:15.514099 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:36:15.514107 | orchestrator | ok: [testbed-node-3] 
2026-04-09 00:36:15.514115 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:15.514123 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:15.514132 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:15.514140 | orchestrator |
2026-04-09 00:36:15.514148 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-09 00:36:15.514157 | orchestrator | Thursday 09 April 2026 00:36:12 +0000 (0:00:01.656) 0:00:50.340 ********
2026-04-09 00:36:15.514165 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:15.514173 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:15.514181 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:15.514190 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:15.514198 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:15.514206 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:15.514214 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:15.514223 | orchestrator |
2026-04-09 00:36:15.514231 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-09 00:36:15.514239 | orchestrator | Thursday 09 April 2026 00:36:13 +0000 (0:00:00.976) 0:00:51.317 ********
2026-04-09 00:36:15.514255 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:15.514263 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:36:15.514271 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:36:15.514279 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:36:15.514288 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:36:15.514296 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:36:15.514304 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:36:15.514312 | orchestrator |
2026-04-09 00:36:15.514329 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-09 00:36:16.889850 | orchestrator | Thursday 09 April 2026 00:36:15 +0000 (0:00:01.859) 0:00:53.176 ********
2026-04-09 00:36:16.889967 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:16.889992 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:16.890010 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:16.890093 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:16.890112 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:16.890129 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:16.890148 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:16.890166 | orchestrator |
2026-04-09 00:36:16.890186 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-09 00:36:16.890225 | orchestrator | Thursday 09 April 2026 00:36:16 +0000 (0:00:00.706) 0:00:53.882 ********
2026-04-09 00:36:16.890242 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:36:16.890259 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:36:16.890275 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:36:16.890292 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:36:16.890308 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:36:16.890324 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:36:16.890340 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:36:16.890357 | orchestrator |
2026-04-09 00:36:16.890417 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:36:16.890438 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 00:36:16.890458 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:16.890476 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:16.890494 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:16.890512 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:16.890529 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:16.890546 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 00:36:16.890563 | orchestrator |
2026-04-09 00:36:16.890586 | orchestrator |
2026-04-09 00:36:16.890605 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:36:16.890622 | orchestrator | Thursday 09 April 2026 00:36:16 +0000 (0:00:00.455) 0:00:54.338 ********
2026-04-09 00:36:16.890639 | orchestrator | ===============================================================================
2026-04-09 00:36:16.890656 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.50s
2026-04-09 00:36:16.890672 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.21s
2026-04-09 00:36:16.890689 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.08s
2026-04-09 00:36:16.890740 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.46s
2026-04-09 00:36:16.890759 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.55s
2026-04-09 00:36:16.890773 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s
2026-04-09 00:36:16.890789 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.86s
2026-04-09 00:36:16.890805 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.80s
2026-04-09 00:36:16.890820 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.66s
2026-04-09 00:36:16.890835 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s
2026-04-09 00:36:16.890852 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.60s
2026-04-09 00:36:16.890867 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.52s
2026-04-09 00:36:16.890883 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s
2026-04-09 00:36:16.890898 | orchestrator | osism.commons.network : Create required directories --------------------- 1.22s
2026-04-09 00:36:16.890914 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.22s
2026-04-09 00:36:16.890931 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s
2026-04-09 00:36:16.890948 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s
2026-04-09 00:36:16.890963 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s
2026-04-09 00:36:16.890981 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s
2026-04-09 00:36:16.890997 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s
2026-04-09 00:36:17.001164 | orchestrator | + osism apply wireguard
2026-04-09 00:36:28.123874 | orchestrator | 2026-04-09 00:36:28 | INFO  | Prepare task for execution of wireguard.
2026-04-09 00:36:28.201057 | orchestrator | 2026-04-09 00:36:28 | INFO  | Task 86720de1-6634-4bc0-ad35-1a3fba9e8d8d (wireguard) was prepared for execution.
2026-04-09 00:36:28.201128 | orchestrator | 2026-04-09 00:36:28 | INFO  | It takes a moment until task 86720de1-6634-4bc0-ad35-1a3fba9e8d8d (wireguard) has been started and output is visible here.
2026-04-09 00:36:45.932917 | orchestrator |
2026-04-09 00:36:45.933006 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-09 00:36:45.933018 | orchestrator |
2026-04-09 00:36:45.933028 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-09 00:36:45.933037 | orchestrator | Thursday 09 April 2026 00:36:31 +0000 (0:00:00.265) 0:00:00.265 ********
2026-04-09 00:36:45.933046 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:45.933055 | orchestrator |
2026-04-09 00:36:45.933063 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-09 00:36:45.933071 | orchestrator | Thursday 09 April 2026 00:36:32 +0000 (0:00:01.486) 0:00:01.752 ********
2026-04-09 00:36:45.933079 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933088 | orchestrator |
2026-04-09 00:36:45.933096 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-09 00:36:45.933104 | orchestrator | Thursday 09 April 2026 00:36:38 +0000 (0:00:05.630) 0:00:07.383 ********
2026-04-09 00:36:45.933112 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933120 | orchestrator |
2026-04-09 00:36:45.933128 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-09 00:36:45.933155 | orchestrator | Thursday 09 April 2026 00:36:39 +0000 (0:00:00.522) 0:00:07.905 ********
2026-04-09 00:36:45.933163 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933171 | orchestrator |
2026-04-09 00:36:45.933179 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-09 00:36:45.933187 | orchestrator | Thursday 09 April 2026 00:36:39 +0000 (0:00:00.439) 0:00:08.345 ********
2026-04-09 00:36:45.933195 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:45.933221 | orchestrator |
2026-04-09 00:36:45.933230 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-09 00:36:45.933238 | orchestrator | Thursday 09 April 2026 00:36:40 +0000 (0:00:00.513) 0:00:08.858 ********
2026-04-09 00:36:45.933246 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:45.933254 | orchestrator |
2026-04-09 00:36:45.933261 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-09 00:36:45.933269 | orchestrator | Thursday 09 April 2026 00:36:40 +0000 (0:00:00.393) 0:00:09.252 ********
2026-04-09 00:36:45.933277 | orchestrator | ok: [testbed-manager]
2026-04-09 00:36:45.933285 | orchestrator |
2026-04-09 00:36:45.933293 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-09 00:36:45.933306 | orchestrator | Thursday 09 April 2026 00:36:40 +0000 (0:00:00.419) 0:00:09.671 ********
2026-04-09 00:36:45.933314 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933322 | orchestrator |
2026-04-09 00:36:45.933329 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-09 00:36:45.933337 | orchestrator | Thursday 09 April 2026 00:36:42 +0000 (0:00:01.139) 0:00:10.811 ********
2026-04-09 00:36:45.933381 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:36:45.933389 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933397 | orchestrator |
2026-04-09 00:36:45.933405 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-09 00:36:45.933413 | orchestrator | Thursday 09 April 2026 00:36:42 +0000 (0:00:00.915) 0:00:11.726 ********
2026-04-09 00:36:45.933421 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933429 | orchestrator |
2026-04-09 00:36:45.933436 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-09 00:36:45.933444 | orchestrator | Thursday 09 April 2026 00:36:44 +0000 (0:00:01.901) 0:00:13.627 ********
2026-04-09 00:36:45.933452 | orchestrator | changed: [testbed-manager]
2026-04-09 00:36:45.933461 | orchestrator |
2026-04-09 00:36:45.933470 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:36:45.933480 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:36:45.933490 | orchestrator |
2026-04-09 00:36:45.933498 | orchestrator |
2026-04-09 00:36:45.933508 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:36:45.933518 | orchestrator | Thursday 09 April 2026 00:36:45 +0000 (0:00:00.879) 0:00:14.507 ********
2026-04-09 00:36:45.933527 | orchestrator | ===============================================================================
2026-04-09 00:36:45.933536 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.63s
2026-04-09 00:36:45.933545 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.90s
2026-04-09 00:36:45.933554 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.49s
2026-04-09 00:36:45.933563 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s
2026-04-09 00:36:45.933572 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2026-04-09 00:36:45.933581 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s
2026-04-09 00:36:45.933590 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s
2026-04-09 00:36:45.933599 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s
2026-04-09 00:36:45.933608 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2026-04-09 00:36:45.933618 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-04-09 00:36:45.933627 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-04-09 00:36:46.098780 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-09 00:36:46.130566 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-09 00:36:46.130686 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-09 00:36:46.205247 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 189 0 --:--:-- --:--:-- --:--:-- 191
2026-04-09 00:36:46.219420 | orchestrator | + osism apply --environment custom workarounds
2026-04-09 00:36:47.389489 | orchestrator | 2026-04-09 00:36:47 | INFO  | Trying to run play workarounds in environment custom
2026-04-09 00:36:57.463900 | orchestrator | 2026-04-09 00:36:57 | INFO  | Prepare task for execution of workarounds.
2026-04-09 00:36:57.531128 | orchestrator | 2026-04-09 00:36:57 | INFO  | Task 0a47e9e5-a3bc-4d43-966a-87f35f619dab (workarounds) was prepared for execution.
2026-04-09 00:36:57.531446 | orchestrator | 2026-04-09 00:36:57 | INFO  | It takes a moment until task 0a47e9e5-a3bc-4d43-966a-87f35f619dab (workarounds) has been started and output is visible here.
2026-04-09 00:37:21.940675 | orchestrator |
2026-04-09 00:37:21.940792 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:37:21.940810 | orchestrator |
2026-04-09 00:37:21.940822 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-09 00:37:21.940835 | orchestrator | Thursday 09 April 2026 00:37:00 +0000 (0:00:00.177) 0:00:00.177 ********
2026-04-09 00:37:21.940846 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940858 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940868 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940879 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940890 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940901 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940912 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-09 00:37:21.940923 | orchestrator |
2026-04-09 00:37:21.940934 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-09 00:37:21.940946 | orchestrator |
2026-04-09 00:37:21.940957 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-09 00:37:21.940967 | orchestrator | Thursday 09 April 2026 00:37:01 +0000 (0:00:00.699) 0:00:00.877 ********
2026-04-09 00:37:21.940978 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:21.940990 | orchestrator |
2026-04-09 00:37:21.941017 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-09 00:37:21.941028 | orchestrator |
2026-04-09 00:37:21.941053 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-09 00:37:21.941064 | orchestrator | Thursday 09 April 2026 00:37:03 +0000 (0:00:02.573) 0:00:03.450 ********
2026-04-09 00:37:21.941075 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:37:21.941086 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:37:21.941097 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:37:21.941107 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:37:21.941118 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:37:21.941129 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:37:21.941139 | orchestrator |
2026-04-09 00:37:21.941150 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-09 00:37:21.941161 | orchestrator |
2026-04-09 00:37:21.941172 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-09 00:37:21.941183 | orchestrator | Thursday 09 April 2026 00:37:06 +0000 (0:00:02.406) 0:00:05.857 ********
2026-04-09 00:37:21.941195 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:21.941207 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:21.941220 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:21.941256 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:21.941270 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:21.941282 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-09 00:37:21.941294 | orchestrator |
2026-04-09 00:37:21.941338 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-09 00:37:21.941352 | orchestrator | Thursday 09 April 2026 00:37:07 +0000 (0:00:01.317) 0:00:07.174 ********
2026-04-09 00:37:21.941365 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:21.941379 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:21.941391 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:21.941405 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:21.941418 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:21.941431 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:21.941444 | orchestrator |
2026-04-09 00:37:21.941457 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-09 00:37:21.941471 | orchestrator | Thursday 09 April 2026 00:37:11 +0000 (0:00:03.741) 0:00:10.916 ********
2026-04-09 00:37:21.941483 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:37:21.941496 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:37:21.941509 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:37:21.941521 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:37:21.941535 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:37:21.941547 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:37:21.941561 | orchestrator |
2026-04-09 00:37:21.941574 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-09 00:37:21.941585 | orchestrator |
2026-04-09 00:37:21.941596 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-09 00:37:21.941607 | orchestrator | Thursday 09 April 2026 00:37:11 +0000 (0:00:00.523) 0:00:11.440 ********
2026-04-09 00:37:21.941618 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:21.941628 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:21.941639 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:21.941650 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:21.941661 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:21.941672 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:21.941683 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:21.941693 | orchestrator |
2026-04-09 00:37:21.941704 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-09 00:37:21.941715 | orchestrator | Thursday 09 April 2026 00:37:13 +0000 (0:00:01.749) 0:00:13.190 ********
2026-04-09 00:37:21.941726 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:21.941737 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:21.941747 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:21.941758 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:21.941769 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:21.941779 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:21.941808 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:21.941819 | orchestrator |
2026-04-09 00:37:21.941830 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-09 00:37:21.941841 | orchestrator | Thursday 09 April 2026 00:37:15 +0000 (0:00:01.494) 0:00:14.684 ********
2026-04-09 00:37:21.941851 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:21.941862 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:37:21.941872 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:37:21.941883 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:37:21.941893 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:37:21.941904 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:37:21.941914 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:37:21.941924 | orchestrator |
2026-04-09 00:37:21.941944 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-09 00:37:21.941954 | orchestrator | Thursday 09 April 2026 00:37:16 +0000 (0:00:01.531) 0:00:16.215 ********
2026-04-09 00:37:21.941965 | orchestrator | changed: [testbed-manager]
2026-04-09 00:37:21.941976 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:21.941987 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:21.941997 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:21.942008 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:21.942076 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:21.942095 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:21.942113 | orchestrator |
2026-04-09 00:37:21.942131 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-09 00:37:21.942150 | orchestrator | Thursday 09 April 2026 00:37:18 +0000 (0:00:01.515) 0:00:17.731 ********
2026-04-09 00:37:21.942169 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:37:21.942187 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:37:21.942209 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:37:21.942220 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:37:21.942231 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:37:21.942241 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:37:21.942252 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:37:21.942262 | orchestrator |
2026-04-09 00:37:21.942273 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-09 00:37:21.942284 | orchestrator |
2026-04-09 00:37:21.942295 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-09 00:37:21.942331 | orchestrator | Thursday 09 April 2026 00:37:18 +0000 (0:00:00.662) 0:00:18.394 ********
2026-04-09 00:37:21.942343 | orchestrator | ok: [testbed-manager]
2026-04-09 00:37:21.942353 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:37:21.942364 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:37:21.942374 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:37:21.942385 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:37:21.942395 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:37:21.942406 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:37:21.942416 | orchestrator |
2026-04-09 00:37:21.942427 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:37:21.942440 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 00:37:21.942452 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:21.942463 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:21.942474 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:21.942485 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:21.942495 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:21.942506 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:21.942516 | orchestrator |
2026-04-09 00:37:21.942527 | orchestrator |
2026-04-09 00:37:21.942538 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:37:21.942549 | orchestrator | Thursday 09 April 2026 00:37:21 +0000 (0:00:03.068) 0:00:21.462 ********
2026-04-09 00:37:21.942559 | orchestrator | ===============================================================================
2026-04-09 00:37:21.942578 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.74s
2026-04-09 00:37:21.942589 | orchestrator | Install python3-docker -------------------------------------------------- 3.07s
2026-04-09 00:37:21.942600 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s
2026-04-09 00:37:21.942610 | orchestrator | Apply netplan configuration --------------------------------------------- 2.41s
2026-04-09 00:37:21.942621 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.75s
2026-04-09 00:37:21.942631 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2026-04-09 00:37:21.942642 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.52s
2026-04-09 00:37:21.942652 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-04-09 00:37:21.942663 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.32s
2026-04-09 00:37:21.942673 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s
2026-04-09 00:37:21.942684 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2026-04-09 00:37:21.942704 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.52s
2026-04-09 00:37:22.320227 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-09 00:37:33.649740 | orchestrator | 2026-04-09 00:37:33 | INFO  | Prepare task for execution of reboot.
2026-04-09 00:37:33.725980 | orchestrator | 2026-04-09 00:37:33 | INFO  | Task de9b8bb0-9ca8-42b8-91ae-6fe1aeef007e (reboot) was prepared for execution.
2026-04-09 00:37:33.726140 | orchestrator | 2026-04-09 00:37:33 | INFO  | It takes a moment until task de9b8bb0-9ca8-42b8-91ae-6fe1aeef007e (reboot) has been started and output is visible here.
2026-04-09 00:37:44.717718 | orchestrator |
2026-04-09 00:37:44.717802 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-09 00:37:44.717814 | orchestrator |
2026-04-09 00:37:44.717821 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-09 00:37:44.717829 | orchestrator | Thursday 09 April 2026 00:37:37 +0000 (0:00:00.248) 0:00:00.248 ********
2026-04-09 00:37:44.717836 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:37:44.717843 | orchestrator |
2026-04-09 00:37:44.717850 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-09 00:37:44.717857 | orchestrator | Thursday 09 April 2026 00:37:37 +0000 (0:00:00.151) 0:00:00.400 ********
2026-04-09 00:37:44.717864 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:37:44.717871 | orchestrator |
2026-04-09 00:37:44.717893 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-09 00:37:44.717900 | orchestrator | Thursday 09 April 2026 00:37:38 +0000 (0:00:01.308) 0:00:01.709 ********
2026-04-09 00:37:44.717907 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:37:44.717913 | orchestrator |
2026-04-09 00:37:44.717920 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-09 00:37:44.717926 | orchestrator |
2026-04-09 00:37:44.717933 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-09 00:37:44.717940 | orchestrator | Thursday 09 April 2026 00:37:38 +0000 (0:00:00.101) 0:00:01.810 ********
2026-04-09 00:37:44.717947 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:37:44.717953 | orchestrator |
2026-04-09 00:37:44.717960 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-09 00:37:44.717967 | orchestrator | Thursday 09 April 2026 00:37:38 +0000 (0:00:00.090) 0:00:01.900 ********
2026-04-09 00:37:44.717973 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:37:44.717980 | orchestrator |
2026-04-09 00:37:44.717987 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-09 00:37:44.717994 | orchestrator | Thursday 09 April 2026 00:37:39 +0000 (0:00:01.006) 0:00:02.906 ********
2026-04-09 00:37:44.718000 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:37:44.718007 | orchestrator |
2026-04-09 00:37:44.718088 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-09 00:37:44.718098 | orchestrator |
2026-04-09 00:37:44.718105 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-09 00:37:44.718112 | orchestrator | Thursday 09 April 2026 00:37:39 +0000 (0:00:00.097) 0:00:03.003 ********
2026-04-09 00:37:44.718119 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:37:44.718126 | orchestrator |
2026-04-09 00:37:44.718133 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-09 00:37:44.718139 | orchestrator | Thursday 09 April 2026 00:37:39 +0000 (0:00:00.087) 0:00:03.091 ********
2026-04-09 00:37:44.718146 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:37:44.718153 | orchestrator |
2026-04-09 00:37:44.718160 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-09 00:37:44.718166 | orchestrator | Thursday 09 April 2026 00:37:40 +0000 (0:00:00.997) 0:00:04.089 ********
2026-04-09 00:37:44.718173 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:37:44.718180 | orchestrator |
2026-04-09 00:37:44.718187 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-09 00:37:44.718193 | orchestrator |
2026-04-09 00:37:44.718200 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-09 00:37:44.718207 | orchestrator | Thursday 09 April 2026 00:37:40 +0000 (0:00:00.090) 0:00:04.180 ********
2026-04-09 00:37:44.718214 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:37:44.718221 | orchestrator |
2026-04-09 00:37:44.718227 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-09 00:37:44.718234 | orchestrator | Thursday 09 April 2026 00:37:41 +0000 (0:00:00.099) 0:00:04.279 ********
2026-04-09 00:37:44.718241 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:37:44.718247 | orchestrator |
2026-04-09 00:37:44.718254 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-09 00:37:44.718260 | orchestrator | Thursday 09 April 2026 00:37:42 +0000 (0:00:01.004) 0:00:05.284 ********
2026-04-09 00:37:44.718268 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:37:44.718275 | orchestrator |
2026-04-09 00:37:44.718346 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-09 00:37:44.718353 | orchestrator |
2026-04-09 00:37:44.718361 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-09 00:37:44.718368 | orchestrator | Thursday 09 April 2026 00:37:42 +0000 (0:00:00.118) 0:00:05.403 ********
2026-04-09 00:37:44.718375 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:37:44.718383 | orchestrator |
2026-04-09 00:37:44.718389 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-09 00:37:44.718397 | orchestrator | Thursday 09 April 2026 00:37:42 +0000 (0:00:00.092) 0:00:05.495 ********
2026-04-09 00:37:44.718404 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:37:44.718411 | orchestrator |
2026-04-09 00:37:44.718418 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-09 00:37:44.718425 | orchestrator | Thursday 09 April 2026 00:37:43 +0000 (0:00:01.064) 0:00:06.559 ********
2026-04-09 00:37:44.718433 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:37:44.718440 | orchestrator |
2026-04-09 00:37:44.718446 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-09 00:37:44.718453 | orchestrator |
2026-04-09 00:37:44.718460 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-09 00:37:44.718468 | orchestrator | Thursday 09 April 2026 00:37:43 +0000 (0:00:00.101) 0:00:06.661 ********
2026-04-09 00:37:44.718475 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:37:44.718482 | orchestrator |
2026-04-09 00:37:44.718490 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-09 00:37:44.718496 | orchestrator | Thursday 09 April 2026 00:37:43 +0000 (0:00:00.099) 0:00:06.761 ********
2026-04-09 00:37:44.718504 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:37:44.718512 | orchestrator |
2026-04-09 00:37:44.718519 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-09 00:37:44.718532 | orchestrator | Thursday 09 April 2026 00:37:44 +0000 (0:00:01.005) 0:00:07.766 ********
2026-04-09 00:37:44.718554 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:37:44.718561 | orchestrator |
2026-04-09 00:37:44.718568 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:37:44.718576 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:44.718586 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:44.718598 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:44.718605 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:44.718612 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:44.718619 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:37:44.718626 | orchestrator |
2026-04-09 00:37:44.718633 | orchestrator |
2026-04-09 00:37:44.718640 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:37:44.718647 | orchestrator | Thursday 09 April 2026 00:37:44 +0000 (0:00:00.030) 0:00:07.796 ********
2026-04-09 00:37:44.718653 | orchestrator | ===============================================================================
2026-04-09 00:37:44.718660 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.39s
2026-04-09 00:37:44.718667 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s
2026-04-09 00:37:44.718673 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s
2026-04-09 00:37:44.833156 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-09 00:37:56.042181 | orchestrator | 2026-04-09 00:37:56 | INFO  | Prepare task for execution of wait-for-connection.
2026-04-09 00:37:56.123638 | orchestrator | 2026-04-09 00:37:56 | INFO  | Task 26c9be42-c2dc-4817-be64-829d25a676d6 (wait-for-connection) was prepared for execution.
2026-04-09 00:37:56.123704 | orchestrator | 2026-04-09 00:37:56 | INFO  | It takes a moment until task 26c9be42-c2dc-4817-be64-829d25a676d6 (wait-for-connection) has been started and output is visible here.
2026-04-09 00:38:10.638711 | orchestrator | 2026-04-09 00:38:10.638831 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-09 00:38:10.638846 | orchestrator | 2026-04-09 00:38:10.638856 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-09 00:38:10.638865 | orchestrator | Thursday 09 April 2026 00:37:59 +0000 (0:00:00.231) 0:00:00.231 ******** 2026-04-09 00:38:10.638874 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:38:10.638884 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:38:10.638893 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:38:10.638902 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:38:10.638911 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:38:10.638920 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:38:10.638929 | orchestrator | 2026-04-09 00:38:10.638938 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:38:10.638947 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:10.638958 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:10.638994 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:10.639003 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:10.639012 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:10.639021 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:10.639030 | orchestrator | 2026-04-09 00:38:10.639039 | orchestrator | 2026-04-09 00:38:10.639047 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 00:38:10.639056 | orchestrator | Thursday 09 April 2026 00:38:10 +0000 (0:00:11.435) 0:00:11.666 ******** 2026-04-09 00:38:10.639064 | orchestrator | =============================================================================== 2026-04-09 00:38:10.639073 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.44s 2026-04-09 00:38:10.762526 | orchestrator | + osism apply hddtemp 2026-04-09 00:38:21.886754 | orchestrator | 2026-04-09 00:38:21 | INFO  | Prepare task for execution of hddtemp. 2026-04-09 00:38:21.958215 | orchestrator | 2026-04-09 00:38:21 | INFO  | Task fa6bc361-6ad5-4883-bb51-0a02dbbfdbde (hddtemp) was prepared for execution. 2026-04-09 00:38:21.958350 | orchestrator | 2026-04-09 00:38:21 | INFO  | It takes a moment until task fa6bc361-6ad5-4883-bb51-0a02dbbfdbde (hddtemp) has been started and output is visible here. 2026-04-09 00:38:48.341071 | orchestrator | 2026-04-09 00:38:48.341177 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-09 00:38:48.341194 | orchestrator | 2026-04-09 00:38:48.341206 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-09 00:38:48.341218 | orchestrator | Thursday 09 April 2026 00:38:25 +0000 (0:00:00.298) 0:00:00.298 ******** 2026-04-09 00:38:48.341307 | orchestrator | ok: [testbed-manager] 2026-04-09 00:38:48.341321 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:38:48.341332 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:38:48.341343 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:38:48.341354 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:38:48.341365 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:38:48.341392 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:38:48.341403 | orchestrator | 2026-04-09 00:38:48.341414 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-09 00:38:48.341425 | orchestrator | Thursday 09 April 2026 00:38:25 +0000 (0:00:00.547) 0:00:00.846 ******** 2026-04-09 00:38:48.341438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:38:48.341452 | orchestrator | 2026-04-09 00:38:48.341463 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-09 00:38:48.341475 | orchestrator | Thursday 09 April 2026 00:38:26 +0000 (0:00:01.036) 0:00:01.883 ******** 2026-04-09 00:38:48.341485 | orchestrator | ok: [testbed-manager] 2026-04-09 00:38:48.341496 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:38:48.341507 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:38:48.341518 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:38:48.341529 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:38:48.341540 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:38:48.341550 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:38:48.341561 | orchestrator | 2026-04-09 00:38:48.341572 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-09 00:38:48.341583 | orchestrator | Thursday 09 April 2026 00:38:29 +0000 (0:00:02.438) 0:00:04.322 ******** 2026-04-09 00:38:48.341594 | orchestrator | changed: [testbed-manager] 2026-04-09 00:38:48.341606 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:38:48.341642 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:38:48.341655 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:38:48.341668 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:38:48.341681 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:38:48.341694 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:38:48.341722 | 
orchestrator | 2026-04-09 00:38:48.341735 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-09 00:38:48.341759 | orchestrator | Thursday 09 April 2026 00:38:30 +0000 (0:00:00.926) 0:00:05.249 ******** 2026-04-09 00:38:48.341772 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:38:48.341784 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:38:48.341797 | orchestrator | ok: [testbed-manager] 2026-04-09 00:38:48.341810 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:38:48.341823 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:38:48.341835 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:38:48.341848 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:38:48.341860 | orchestrator | 2026-04-09 00:38:48.341873 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-09 00:38:48.341886 | orchestrator | Thursday 09 April 2026 00:38:31 +0000 (0:00:01.304) 0:00:06.553 ******** 2026-04-09 00:38:48.341899 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:38:48.341911 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:38:48.341923 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:38:48.341936 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:38:48.341949 | orchestrator | changed: [testbed-manager] 2026-04-09 00:38:48.341961 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:38:48.341971 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:38:48.341982 | orchestrator | 2026-04-09 00:38:48.341993 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-09 00:38:48.342004 | orchestrator | Thursday 09 April 2026 00:38:31 +0000 (0:00:00.589) 0:00:07.143 ******** 2026-04-09 00:38:48.342070 | orchestrator | changed: [testbed-manager] 2026-04-09 00:38:48.342085 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:38:48.342096 | orchestrator | changed: [testbed-node-1] 
2026-04-09 00:38:48.342106 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:38:48.342117 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:38:48.342127 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:38:48.342139 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:38:48.342149 | orchestrator | 2026-04-09 00:38:48.342161 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-09 00:38:48.342171 | orchestrator | Thursday 09 April 2026 00:38:45 +0000 (0:00:13.334) 0:00:20.477 ******** 2026-04-09 00:38:48.342182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:38:48.342194 | orchestrator | 2026-04-09 00:38:48.342204 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-09 00:38:48.342215 | orchestrator | Thursday 09 April 2026 00:38:46 +0000 (0:00:01.058) 0:00:21.535 ******** 2026-04-09 00:38:48.342247 | orchestrator | changed: [testbed-manager] 2026-04-09 00:38:48.342258 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:38:48.342269 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:38:48.342279 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:38:48.342290 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:38:48.342300 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:38:48.342311 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:38:48.342322 | orchestrator | 2026-04-09 00:38:48.342332 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:38:48.342343 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:38:48.342376 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:38:48.342397 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:38:48.342409 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:38:48.342426 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:38:48.342437 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:38:48.342447 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:38:48.342458 | orchestrator | 2026-04-09 00:38:48.342468 | orchestrator | 2026-04-09 00:38:48.342479 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:38:48.342490 | orchestrator | Thursday 09 April 2026 00:38:48 +0000 (0:00:01.763) 0:00:23.300 ******** 2026-04-09 00:38:48.342501 | orchestrator | =============================================================================== 2026-04-09 00:38:48.342512 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.33s 2026-04-09 00:38:48.342522 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.44s 2026-04-09 00:38:48.342533 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.76s 2026-04-09 00:38:48.342543 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.30s 2026-04-09 00:38:48.342554 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.06s 2026-04-09 00:38:48.342564 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.04s 2026-04-09 00:38:48.342575 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.93s 2026-04-09 00:38:48.342586 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.59s 2026-04-09 00:38:48.342596 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.55s 2026-04-09 00:38:48.451729 | orchestrator | ++ semver latest 7.1.1 2026-04-09 00:38:48.497790 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:38:48.497889 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:38:48.497904 | orchestrator | + sudo systemctl restart manager.service 2026-04-09 00:39:02.255077 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 00:39:02.255180 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-09 00:39:02.255197 | orchestrator | + local max_attempts=60 2026-04-09 00:39:02.255209 | orchestrator | + local name=ceph-ansible 2026-04-09 00:39:02.255220 | orchestrator | + local attempt_num=1 2026-04-09 00:39:02.255280 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:02.288504 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:02.288589 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:02.288602 | orchestrator | + sleep 5 2026-04-09 00:39:07.291128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:07.318842 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:07.318935 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:07.318950 | orchestrator | + sleep 5 2026-04-09 00:39:12.321312 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:12.361330 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:12.361419 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:12.361434 | orchestrator | + sleep 5 2026-04-09 00:39:17.366651 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:17.401412 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:17.401500 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:17.401514 | orchestrator | + sleep 5 2026-04-09 00:39:22.405271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:22.442529 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:22.442603 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:22.442615 | orchestrator | + sleep 5 2026-04-09 00:39:27.448035 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:27.483620 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:27.483708 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:27.483721 | orchestrator | + sleep 5 2026-04-09 00:39:32.487787 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:32.521683 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:32.521765 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:32.521779 | orchestrator | + sleep 5 2026-04-09 00:39:37.524603 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:37.550117 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:37.550210 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:37.550226 | orchestrator | + sleep 5 2026-04-09 00:39:42.553505 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:42.591615 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:42.591715 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:42.591731 | orchestrator | + sleep 5 2026-04-09 00:39:47.595949 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:47.633758 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:47.633840 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:47.633852 | orchestrator | + sleep 5 2026-04-09 00:39:52.638766 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:52.674833 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:52.674927 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:52.674943 | orchestrator | + sleep 5 2026-04-09 00:39:57.680003 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:39:57.713254 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:39:57.713440 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:39:57.713456 | orchestrator | + sleep 5 2026-04-09 00:40:02.716771 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:02.744673 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:02.744852 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:40:02.744882 | orchestrator | + sleep 5 2026-04-09 00:40:07.749619 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:40:07.788028 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:07.788106 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 00:40:07.788208 | orchestrator | + local max_attempts=60 2026-04-09 00:40:07.788221 | orchestrator | + local name=kolla-ansible 2026-04-09 00:40:07.788242 | orchestrator | + local attempt_num=1 2026-04-09 00:40:07.789053 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 00:40:07.823117 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:07.823191 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-09 00:40:07.823204 | orchestrator | + local max_attempts=60 2026-04-09 00:40:07.823215 | orchestrator | + local name=osism-ansible 2026-04-09 00:40:07.823226 | orchestrator | + local attempt_num=1 2026-04-09 00:40:07.824121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 00:40:07.855760 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:40:07.855819 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 00:40:07.855832 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-09 00:40:08.003229 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-09 00:40:08.148677 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-09 00:40:08.276032 | orchestrator | ARA in osism-ansible already disabled. 2026-04-09 00:40:08.390556 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-09 00:40:08.390972 | orchestrator | + osism apply gather-facts 2026-04-09 00:40:19.736380 | orchestrator | 2026-04-09 00:40:19 | INFO  | Prepare task for execution of gather-facts. 2026-04-09 00:40:19.812831 | orchestrator | 2026-04-09 00:40:19 | INFO  | Task 039dbc90-9d72-4fdb-b423-a44eabaa8c38 (gather-facts) was prepared for execution. 2026-04-09 00:40:19.812948 | orchestrator | 2026-04-09 00:40:19 | INFO  | It takes a moment until task 039dbc90-9d72-4fdb-b423-a44eabaa8c38 (gather-facts) has been started and output is visible here. 
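The repeated `docker inspect` polling traced above can be reconstructed as a small helper. This is a minimal sketch, not the script's exact source: the probe is injected here as `container_health_status` (an assumed name) so the loop can be exercised without a Docker daemon, whereas the real trace shells out to `/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"`.

```shell
# Sketch of the health-wait loop seen in the trace. Assumption: the probe
# command is factored out as container_health_status; the real script calls
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$(container_health_status "$name")
        [[ "$status" == "healthy" ]] && return 0
        # Give up once the attempt budget is spent (60 attempts * 5 s here).
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} never became healthy" >&2
            return 1
        fi
        sleep "${POLL_INTERVAL:-5}"
    done
}
```

In the log, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute before reporting `healthy`, while `kolla-ansible` and `osism-ansible` pass on the first probe.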
2026-04-09 00:40:32.313187 | orchestrator | 2026-04-09 00:40:32.313388 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:40:32.313448 | orchestrator | 2026-04-09 00:40:32.313471 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 00:40:32.313492 | orchestrator | Thursday 09 April 2026 00:40:22 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-04-09 00:40:32.313510 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:40:32.313551 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:40:32.313570 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:40:32.313602 | orchestrator | ok: [testbed-manager] 2026-04-09 00:40:32.313615 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:40:32.313626 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:40:32.313637 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:40:32.313647 | orchestrator | 2026-04-09 00:40:32.313659 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 00:40:32.313670 | orchestrator | 2026-04-09 00:40:32.313681 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 00:40:32.313693 | orchestrator | Thursday 09 April 2026 00:40:31 +0000 (0:00:08.473) 0:00:08.758 ******** 2026-04-09 00:40:32.313704 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:40:32.313715 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:40:32.313727 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:40:32.313739 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:40:32.313752 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:32.313768 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:32.313787 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:32.313815 | orchestrator | 2026-04-09 00:40:32.313835 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-09 00:40:32.313854 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313874 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313893 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313912 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313930 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313949 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313968 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:40:32.313985 | orchestrator | 2026-04-09 00:40:32.314002 | orchestrator | 2026-04-09 00:40:32.314096 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:40:32.314119 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 (0:00:00.631) 0:00:09.390 ******** 2026-04-09 00:40:32.314138 | orchestrator | =============================================================================== 2026-04-09 00:40:32.314158 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.47s 2026-04-09 00:40:32.314177 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-04-09 00:40:32.488126 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-09 00:40:32.505855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-09 
00:40:32.515521 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-09 00:40:32.536047 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-09 00:40:32.552838 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-09 00:40:32.564321 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-09 00:40:32.579579 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-09 00:40:32.593737 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-09 00:40:32.612457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-09 00:40:32.628622 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-09 00:40:32.643503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-09 00:40:32.661318 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-09 00:40:32.678640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-09 00:40:32.698197 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-09 00:40:32.718561 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-09 00:40:32.740391 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-09 00:40:32.759201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-09 00:40:32.780235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-09 00:40:32.796356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-09 00:40:32.817619 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-09 00:40:32.832616 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-09 00:40:32.850507 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-09 00:40:32.871470 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-09 00:40:32.889532 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-09 00:40:33.319620 | orchestrator | ok: Runtime: 0:23:05.759897 2026-04-09 00:40:33.426192 | 2026-04-09 00:40:33.426341 | TASK [Deploy services] 2026-04-09 00:40:33.961482 | orchestrator | skipping: Conditional result was False 2026-04-09 00:40:33.972028 | 2026-04-09 00:40:33.972189 | TASK [Deploy in a nutshell] 2026-04-09 00:40:34.698597 | orchestrator | + set -e 2026-04-09 00:40:34.698840 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:40:34.698880 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:40:34.698909 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:40:34.698929 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:40:34.698949 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:40:34.698971 | 
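The long run of `sudo ln -sf` commands above installs deploy/upgrade/bootstrap helpers into `/usr/local/bin`. As an illustration only (the testbed scripts may intentionally keep the calls explicit), the pattern can be compressed into a small function; `install_helper` and `DRY_RUN` are hypothetical names introduced here.

```shell
# Illustrative wrapper for the symlink pattern in the trace: each helper
# script under /opt/configuration/scripts/ gets a short name on PATH.
# DRY_RUN=1 prints the command instead of running it (useful for testing).
install_helper() {
    local target=$1 link_name=$2
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "ln -sf $target /usr/local/bin/$link_name"
    else
        sudo ln -sf "$target" "/usr/local/bin/$link_name"
    fi
}
```

Using `ln -sf` keeps the operation idempotent: rerunning the deployment simply replaces any existing link rather than failing.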
orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:40:34.699034 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:40:34.699078 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:40:34.699102 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 00:40:34.699127 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 00:40:34.699148 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:40:34.699175 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:40:34.699194 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 00:40:34.699224 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 00:40:34.699242 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 00:40:34.699265 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 00:40:34.699354 | orchestrator | ++ export ARA=false 2026-04-09 00:40:34.699374 | orchestrator | ++ ARA=false 2026-04-09 00:40:34.699391 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:40:34.699412 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:40:34.699429 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:40:34.699446 | orchestrator | ++ TEMPEST=true 2026-04-09 00:40:34.699464 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:40:34.699481 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:40:34.699500 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:40:34.699518 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 00:40:34.699557 | orchestrator | 2026-04-09 00:40:34.699577 | orchestrator | # PULL IMAGES 2026-04-09 00:40:34.699594 | orchestrator | 2026-04-09 00:40:34.699612 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:40:34.699630 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:40:34.699649 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:40:34.699669 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:40:34.699686 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:40:34.699705 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:40:34.699723 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:40:34.699755 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:40:34.699775 | orchestrator | + echo 2026-04-09 00:40:34.699793 | orchestrator | + echo '# PULL IMAGES' 2026-04-09 00:40:34.699809 | orchestrator | + echo 2026-04-09 00:40:34.699827 | orchestrator | ++ semver latest 7.0.0 2026-04-09 00:40:34.747700 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 00:40:34.747832 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 00:40:34.747863 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-09 00:40:36.058884 | orchestrator | 2026-04-09 00:40:36 | INFO  | Trying to run play pull-images in environment custom 2026-04-09 00:40:46.168182 | orchestrator | 2026-04-09 00:40:46 | INFO  | Prepare task for execution of pull-images. 2026-04-09 00:40:46.239159 | orchestrator | 2026-04-09 00:40:46 | INFO  | Task 777a4f08-7537-40a0-8073-bae118d71b6f (pull-images) was prepared for execution. 2026-04-09 00:40:46.239238 | orchestrator | 2026-04-09 00:40:46 | INFO  | Task 777a4f08-7537-40a0-8073-bae118d71b6f is running in background. No more output. Check ARA for logs. 2026-04-09 00:40:47.614159 | orchestrator | 2026-04-09 00:40:47 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-09 00:40:57.740351 | orchestrator | 2026-04-09 00:40:57 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-09 00:40:57.823336 | orchestrator | 2026-04-09 00:40:57 | INFO  | Task 9c497716-863e-4797-a829-ed4e0bff281f (wipe-partitions) was prepared for execution. 2026-04-09 00:40:57.823646 | orchestrator | 2026-04-09 00:40:57 | INFO  | It takes a moment until task 9c497716-863e-4797-a829-ed4e0bff281f (wipe-partitions) has been started and output is visible here. 
2026-04-09 00:41:09.358512 | orchestrator | 2026-04-09 00:41:09.358599 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-09 00:41:09.358607 | orchestrator | 2026-04-09 00:41:09.358612 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-09 00:41:09.358620 | orchestrator | Thursday 09 April 2026 00:41:01 +0000 (0:00:00.147) 0:00:00.147 ******** 2026-04-09 00:41:09.358643 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:41:09.358649 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:41:09.358654 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:41:09.358660 | orchestrator | 2026-04-09 00:41:09.358668 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-09 00:41:09.358676 | orchestrator | Thursday 09 April 2026 00:41:02 +0000 (0:00:00.932) 0:00:01.079 ******** 2026-04-09 00:41:09.358686 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:09.358694 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:09.358701 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:09.358708 | orchestrator | 2026-04-09 00:41:09.358716 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-09 00:41:09.358737 | orchestrator | Thursday 09 April 2026 00:41:02 +0000 (0:00:00.217) 0:00:01.297 ******** 2026-04-09 00:41:09.358745 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:09.358754 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:09.358761 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:41:09.358768 | orchestrator | 2026-04-09 00:41:09.358775 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-09 00:41:09.358780 | orchestrator | Thursday 09 April 2026 00:41:03 +0000 (0:00:00.516) 0:00:01.814 ******** 2026-04-09 00:41:09.358784 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:41:09.358789 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:09.358793 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:09.358797 | orchestrator | 2026-04-09 00:41:09.358802 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-09 00:41:09.358807 | orchestrator | Thursday 09 April 2026 00:41:03 +0000 (0:00:00.212) 0:00:02.026 ******** 2026-04-09 00:41:09.358811 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-09 00:41:09.358818 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-09 00:41:09.358824 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-09 00:41:09.358831 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-09 00:41:09.358837 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-09 00:41:09.358844 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-09 00:41:09.358851 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-09 00:41:09.358859 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-09 00:41:09.358865 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-09 00:41:09.358874 | orchestrator | 2026-04-09 00:41:09.358881 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-09 00:41:09.358888 | orchestrator | Thursday 09 April 2026 00:41:04 +0000 (0:00:01.331) 0:00:03.358 ******** 2026-04-09 00:41:09.358895 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-09 00:41:09.358903 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-09 00:41:09.358910 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-09 00:41:09.358917 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-09 00:41:09.358924 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-09 00:41:09.358931 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-09 00:41:09.358939 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-09 00:41:09.358943 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-09 00:41:09.358948 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-09 00:41:09.358952 | orchestrator | 2026-04-09 00:41:09.358961 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-09 00:41:09.358966 | orchestrator | Thursday 09 April 2026 00:41:05 +0000 (0:00:01.314) 0:00:04.672 ******** 2026-04-09 00:41:09.358970 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-09 00:41:09.358975 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-09 00:41:09.358979 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-09 00:41:09.358984 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-09 00:41:09.358994 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-09 00:41:09.358999 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-09 00:41:09.359003 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-09 00:41:09.359007 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-09 00:41:09.359011 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-09 00:41:09.359016 | orchestrator | 2026-04-09 00:41:09.359020 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-09 00:41:09.359024 | orchestrator | Thursday 09 April 2026 00:41:07 +0000 (0:00:02.028) 0:00:06.701 ******** 2026-04-09 00:41:09.359029 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:41:09.359033 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:41:09.359037 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:41:09.359041 | orchestrator | 2026-04-09 00:41:09.359046 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-09 00:41:09.359051 | orchestrator | Thursday 09 April 2026 00:41:08 +0000 (0:00:00.560) 0:00:07.261 ******** 2026-04-09 00:41:09.359056 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:41:09.359061 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:41:09.359066 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:41:09.359072 | orchestrator | 2026-04-09 00:41:09.359077 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:41:09.359084 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:09.359091 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:09.359110 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:09.359115 | orchestrator | 2026-04-09 00:41:09.359120 | orchestrator | 2026-04-09 00:41:09.359126 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:41:09.359131 | orchestrator | Thursday 09 April 2026 00:41:09 +0000 (0:00:00.687) 0:00:07.949 ******** 2026-04-09 00:41:09.359136 | orchestrator | =============================================================================== 2026-04-09 00:41:09.359142 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.03s 2026-04-09 00:41:09.359147 | orchestrator | Check device availability ----------------------------------------------- 1.33s 2026-04-09 00:41:09.359152 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s 2026-04-09 00:41:09.359158 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.93s 2026-04-09 00:41:09.359163 | orchestrator | Request device events from the kernel 
----------------------------------- 0.69s 2026-04-09 00:41:09.359169 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s 2026-04-09 00:41:09.359174 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.52s 2026-04-09 00:41:09.359179 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2026-04-09 00:41:09.359184 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s 2026-04-09 00:41:20.641504 | orchestrator | 2026-04-09 00:41:20 | INFO  | Prepare task for execution of facts. 2026-04-09 00:41:20.700615 | orchestrator | 2026-04-09 00:41:20 | INFO  | Task ea06fbc3-dec5-438b-a5f6-3285c0e2d39f (facts) was prepared for execution. 2026-04-09 00:41:20.700735 | orchestrator | 2026-04-09 00:41:20 | INFO  | It takes a moment until task ea06fbc3-dec5-438b-a5f6-3285c0e2d39f (facts) has been started and output is visible here. 2026-04-09 00:41:31.412227 | orchestrator | 2026-04-09 00:41:31.412298 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 00:41:31.412306 | orchestrator | 2026-04-09 00:41:31.412400 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 00:41:31.412409 | orchestrator | Thursday 09 April 2026 00:41:23 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-04-09 00:41:31.412416 | orchestrator | ok: [testbed-manager] 2026-04-09 00:41:31.412423 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:41:31.412429 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:41:31.412435 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:41:31.412441 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:31.412447 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:41:31.412453 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:31.412459 | orchestrator | 2026-04-09 00:41:31.412465 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-09 00:41:31.412471 | orchestrator | Thursday 09 April 2026 00:41:24 +0000 (0:00:01.285) 0:00:01.563 ******** 2026-04-09 00:41:31.412478 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:41:31.412485 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:41:31.412492 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:41:31.412498 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:41:31.412504 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:31.412510 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:31.412516 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:31.412523 | orchestrator | 2026-04-09 00:41:31.412529 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:41:31.412551 | orchestrator | 2026-04-09 00:41:31.412558 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-09 00:41:31.412564 | orchestrator | Thursday 09 April 2026 00:41:26 +0000 (0:00:01.096) 0:00:02.660 ******** 2026-04-09 00:41:31.412571 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:41:31.412576 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:41:31.412583 | orchestrator | ok: [testbed-manager] 2026-04-09 00:41:31.412589 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:41:31.412595 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:31.412601 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:31.412607 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:41:31.412612 | orchestrator | 2026-04-09 00:41:31.412619 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 00:41:31.412625 | orchestrator | 2026-04-09 00:41:31.412631 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 00:41:31.412637 | orchestrator | Thursday 09 
April 2026 00:41:30 +0000 (0:00:04.730) 0:00:07.390 ******** 2026-04-09 00:41:31.412643 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:41:31.412647 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:41:31.412651 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:41:31.412655 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:41:31.412659 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:31.412662 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:31.412666 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:41:31.412670 | orchestrator | 2026-04-09 00:41:31.412674 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:41:31.412678 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412683 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412687 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412691 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412695 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412706 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412710 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:41:31.412714 | orchestrator | 2026-04-09 00:41:31.412718 | orchestrator | 2026-04-09 00:41:31.412721 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:41:31.412725 | orchestrator | Thursday 09 April 2026 00:41:31 +0000 (0:00:00.431) 0:00:07.822 ******** 2026-04-09 
00:41:31.412729 | orchestrator | =============================================================================== 2026-04-09 00:41:31.412733 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s 2026-04-09 00:41:31.412737 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-09 00:41:31.412740 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2026-04-09 00:41:31.412744 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-04-09 00:41:32.720381 | orchestrator | 2026-04-09 00:41:32 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-09 00:41:32.776811 | orchestrator | 2026-04-09 00:41:32 | INFO  | Task 5b3ffb62-f5cd-4309-9239-b1df6324cc49 (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-09 00:41:32.776921 | orchestrator | 2026-04-09 00:41:32 | INFO  | It takes a moment until task 5b3ffb62-f5cd-4309-9239-b1df6324cc49 (ceph-configure-lvm-volumes) has been started and output is visible here. 
2026-04-09 00:41:44.145126 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 00:41:44.145217 | orchestrator | 2.16.14 2026-04-09 00:41:44.145225 | orchestrator | 2026-04-09 00:41:44.145230 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-09 00:41:44.145235 | orchestrator | 2026-04-09 00:41:44.145239 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:41:44.145243 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.331) 0:00:00.331 ******** 2026-04-09 00:41:44.145248 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 00:41:44.145252 | orchestrator | 2026-04-09 00:41:44.145256 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:41:44.145260 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.240) 0:00:00.571 ******** 2026-04-09 00:41:44.145264 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:44.145268 | orchestrator | 2026-04-09 00:41:44.145272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145276 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.222) 0:00:00.793 ******** 2026-04-09 00:41:44.145286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-09 00:41:44.145290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-09 00:41:44.145294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-09 00:41:44.145298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-09 00:41:44.145301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-09 
00:41:44.145305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-09 00:41:44.145359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-09 00:41:44.145364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-09 00:41:44.145368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-09 00:41:44.145372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-09 00:41:44.145391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-09 00:41:44.145395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-09 00:41:44.145399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-09 00:41:44.145403 | orchestrator | 2026-04-09 00:41:44.145407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145411 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.349) 0:00:01.142 ******** 2026-04-09 00:41:44.145414 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145419 | orchestrator | 2026-04-09 00:41:44.145423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145426 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.483) 0:00:01.626 ******** 2026-04-09 00:41:44.145430 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145434 | orchestrator | 2026-04-09 00:41:44.145438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145446 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.191) 0:00:01.817 ******** 2026-04-09 
00:41:44.145450 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145454 | orchestrator | 2026-04-09 00:41:44.145457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145461 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.186) 0:00:02.004 ******** 2026-04-09 00:41:44.145465 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145469 | orchestrator | 2026-04-09 00:41:44.145473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145477 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.165) 0:00:02.170 ******** 2026-04-09 00:41:44.145481 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145484 | orchestrator | 2026-04-09 00:41:44.145488 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145492 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.227) 0:00:02.398 ******** 2026-04-09 00:41:44.145496 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145499 | orchestrator | 2026-04-09 00:41:44.145503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145507 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.196) 0:00:02.594 ******** 2026-04-09 00:41:44.145511 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145515 | orchestrator | 2026-04-09 00:41:44.145518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145531 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.189) 0:00:02.784 ******** 2026-04-09 00:41:44.145535 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145545 | orchestrator | 2026-04-09 00:41:44.145549 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-09 00:41:44.145553 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.188) 0:00:02.972 ******** 2026-04-09 00:41:44.145557 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc) 2026-04-09 00:41:44.145562 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc) 2026-04-09 00:41:44.145566 | orchestrator | 2026-04-09 00:41:44.145570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145586 | orchestrator | Thursday 09 April 2026 00:41:40 +0000 (0:00:00.390) 0:00:03.363 ******** 2026-04-09 00:41:44.145590 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7) 2026-04-09 00:41:44.145594 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7) 2026-04-09 00:41:44.145598 | orchestrator | 2026-04-09 00:41:44.145605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145613 | orchestrator | Thursday 09 April 2026 00:41:40 +0000 (0:00:00.412) 0:00:03.776 ******** 2026-04-09 00:41:44.145617 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1) 2026-04-09 00:41:44.145621 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1) 2026-04-09 00:41:44.145625 | orchestrator | 2026-04-09 00:41:44.145629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145633 | orchestrator | Thursday 09 April 2026 00:41:41 +0000 (0:00:00.621) 0:00:04.397 ******** 2026-04-09 00:41:44.145637 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a) 2026-04-09 00:41:44.145641 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a) 2026-04-09 00:41:44.145645 | orchestrator | 2026-04-09 00:41:44.145648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:44.145652 | orchestrator | Thursday 09 April 2026 00:41:41 +0000 (0:00:00.623) 0:00:05.021 ******** 2026-04-09 00:41:44.145656 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:41:44.145660 | orchestrator | 2026-04-09 00:41:44.145664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145668 | orchestrator | Thursday 09 April 2026 00:41:42 +0000 (0:00:00.718) 0:00:05.740 ******** 2026-04-09 00:41:44.145672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-09 00:41:44.145676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-09 00:41:44.145680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-09 00:41:44.145684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-09 00:41:44.145688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-09 00:41:44.145692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-09 00:41:44.145696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-09 00:41:44.145700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-09 00:41:44.145704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-09 00:41:44.145707 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-09 00:41:44.145712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-09 00:41:44.145715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-09 00:41:44.145719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-09 00:41:44.145723 | orchestrator | 2026-04-09 00:41:44.145727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145731 | orchestrator | Thursday 09 April 2026 00:41:42 +0000 (0:00:00.359) 0:00:06.099 ******** 2026-04-09 00:41:44.145735 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145739 | orchestrator | 2026-04-09 00:41:44.145743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145747 | orchestrator | Thursday 09 April 2026 00:41:42 +0000 (0:00:00.198) 0:00:06.297 ******** 2026-04-09 00:41:44.145751 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145755 | orchestrator | 2026-04-09 00:41:44.145759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145763 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.197) 0:00:06.495 ******** 2026-04-09 00:41:44.145767 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145774 | orchestrator | 2026-04-09 00:41:44.145778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145782 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.207) 0:00:06.702 ******** 2026-04-09 00:41:44.145786 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145790 | orchestrator | 2026-04-09 00:41:44.145794 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145797 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.191) 0:00:06.894 ******** 2026-04-09 00:41:44.145801 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145805 | orchestrator | 2026-04-09 00:41:44.145809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145813 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.197) 0:00:07.092 ******** 2026-04-09 00:41:44.145817 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145821 | orchestrator | 2026-04-09 00:41:44.145825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:44.145829 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.184) 0:00:07.276 ******** 2026-04-09 00:41:44.145833 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:44.145837 | orchestrator | 2026-04-09 00:41:44.145844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:50.969883 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.186) 0:00:07.463 ******** 2026-04-09 00:41:50.969990 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970008 | orchestrator | 2026-04-09 00:41:50.970071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:50.970084 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.193) 0:00:07.656 ******** 2026-04-09 00:41:50.970096 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-09 00:41:50.970107 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-09 00:41:50.970118 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-09 00:41:50.970129 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-09 00:41:50.970140 | orchestrator | 2026-04-09 
00:41:50.970151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:50.970181 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.961) 0:00:08.617 ******** 2026-04-09 00:41:50.970193 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970204 | orchestrator | 2026-04-09 00:41:50.970214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:50.970225 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.205) 0:00:08.823 ******** 2026-04-09 00:41:50.970236 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970247 | orchestrator | 2026-04-09 00:41:50.970258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:50.970269 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.186) 0:00:09.010 ******** 2026-04-09 00:41:50.970279 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970290 | orchestrator | 2026-04-09 00:41:50.970301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:50.970312 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.207) 0:00:09.217 ******** 2026-04-09 00:41:50.970353 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970366 | orchestrator | 2026-04-09 00:41:50.970377 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 00:41:50.970387 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.185) 0:00:09.403 ******** 2026-04-09 00:41:50.970398 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-09 00:41:50.970409 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-09 00:41:50.970422 | orchestrator | 2026-04-09 00:41:50.970435 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-09 00:41:50.970448 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.148) 0:00:09.552 ******** 2026-04-09 00:41:50.970485 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970498 | orchestrator | 2026-04-09 00:41:50.970511 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 00:41:50.970524 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.113) 0:00:09.666 ******** 2026-04-09 00:41:50.970536 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970549 | orchestrator | 2026-04-09 00:41:50.970561 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 00:41:50.970574 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.108) 0:00:09.774 ******** 2026-04-09 00:41:50.970586 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970598 | orchestrator | 2026-04-09 00:41:50.970611 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 00:41:50.970623 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.107) 0:00:09.881 ******** 2026-04-09 00:41:50.970636 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.970649 | orchestrator | 2026-04-09 00:41:50.970661 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 00:41:50.970674 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.107) 0:00:09.989 ******** 2026-04-09 00:41:50.970688 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0ecce907-b02d-5708-a2ce-6926a186870f'}}) 2026-04-09 00:41:50.970701 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b063fe53-4e4e-551f-8a45-331436b07c8b'}}) 2026-04-09 00:41:50.970714 | orchestrator | 2026-04-09 00:41:50.970726 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:41:50.970739 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.139) 0:00:10.129 ******** 2026-04-09 00:41:50.970753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0ecce907-b02d-5708-a2ce-6926a186870f'}})  2026-04-09 00:41:50.970775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b063fe53-4e4e-551f-8a45-331436b07c8b'}})  2026-04-09 00:41:50.970792 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970803 | orchestrator | 2026-04-09 00:41:50.970814 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:41:50.970824 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.142) 0:00:10.271 ******** 2026-04-09 00:41:50.970835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0ecce907-b02d-5708-a2ce-6926a186870f'}})  2026-04-09 00:41:50.970846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b063fe53-4e4e-551f-8a45-331436b07c8b'}})  2026-04-09 00:41:50.970857 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.970868 | orchestrator | 2026-04-09 00:41:50.970887 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:41:50.970905 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.258) 0:00:10.530 ******** 2026-04-09 00:41:50.970926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0ecce907-b02d-5708-a2ce-6926a186870f'}})  2026-04-09 00:41:50.970976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b063fe53-4e4e-551f-8a45-331436b07c8b'}})  2026-04-09 00:41:50.970996 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971013 | 
orchestrator | 2026-04-09 00:41:50.971030 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:41:50.971048 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.137) 0:00:10.668 ******** 2026-04-09 00:41:50.971066 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.971083 | orchestrator | 2026-04-09 00:41:50.971102 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 00:41:50.971120 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.129) 0:00:10.797 ******** 2026-04-09 00:41:50.971138 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.971170 | orchestrator | 2026-04-09 00:41:50.971181 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 00:41:50.971192 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.135) 0:00:10.933 ******** 2026-04-09 00:41:50.971203 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971214 | orchestrator | 2026-04-09 00:41:50.971225 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 00:41:50.971235 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.118) 0:00:11.051 ******** 2026-04-09 00:41:50.971246 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971257 | orchestrator | 2026-04-09 00:41:50.971267 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 00:41:50.971278 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.119) 0:00:11.171 ******** 2026-04-09 00:41:50.971288 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971299 | orchestrator | 2026-04-09 00:41:50.971309 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 00:41:50.971320 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 
(0:00:00.114) 0:00:11.286 ******** 2026-04-09 00:41:50.971360 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 00:41:50.971371 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:41:50.971382 | orchestrator |  "sdb": { 2026-04-09 00:41:50.971394 | orchestrator |  "osd_lvm_uuid": "0ecce907-b02d-5708-a2ce-6926a186870f" 2026-04-09 00:41:50.971405 | orchestrator |  }, 2026-04-09 00:41:50.971416 | orchestrator |  "sdc": { 2026-04-09 00:41:50.971427 | orchestrator |  "osd_lvm_uuid": "b063fe53-4e4e-551f-8a45-331436b07c8b" 2026-04-09 00:41:50.971438 | orchestrator |  } 2026-04-09 00:41:50.971449 | orchestrator |  } 2026-04-09 00:41:50.971460 | orchestrator | } 2026-04-09 00:41:50.971471 | orchestrator | 2026-04-09 00:41:50.971482 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-09 00:41:50.971493 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.126) 0:00:11.412 ******** 2026-04-09 00:41:50.971504 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971514 | orchestrator | 2026-04-09 00:41:50.971525 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-09 00:41:50.971536 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.119) 0:00:11.531 ******** 2026-04-09 00:41:50.971546 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971557 | orchestrator | 2026-04-09 00:41:50.971568 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-09 00:41:50.971579 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.136) 0:00:11.668 ******** 2026-04-09 00:41:50.971590 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.971601 | orchestrator | 2026-04-09 00:41:50.971611 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-09 00:41:50.971622 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 
(0:00:00.118) 0:00:11.786 ******** 2026-04-09 00:41:50.971633 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:41:50.971643 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-09 00:41:50.971654 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:41:50.971665 | orchestrator |  "sdb": { 2026-04-09 00:41:50.971676 | orchestrator |  "osd_lvm_uuid": "0ecce907-b02d-5708-a2ce-6926a186870f" 2026-04-09 00:41:50.971687 | orchestrator |  }, 2026-04-09 00:41:50.971698 | orchestrator |  "sdc": { 2026-04-09 00:41:50.971709 | orchestrator |  "osd_lvm_uuid": "b063fe53-4e4e-551f-8a45-331436b07c8b" 2026-04-09 00:41:50.971719 | orchestrator |  } 2026-04-09 00:41:50.971730 | orchestrator |  }, 2026-04-09 00:41:50.971741 | orchestrator |  "lvm_volumes": [ 2026-04-09 00:41:50.971752 | orchestrator |  { 2026-04-09 00:41:50.971763 | orchestrator |  "data": "osd-block-0ecce907-b02d-5708-a2ce-6926a186870f", 2026-04-09 00:41:50.971774 | orchestrator |  "data_vg": "ceph-0ecce907-b02d-5708-a2ce-6926a186870f" 2026-04-09 00:41:50.971792 | orchestrator |  }, 2026-04-09 00:41:50.971803 | orchestrator |  { 2026-04-09 00:41:50.971814 | orchestrator |  "data": "osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b", 2026-04-09 00:41:50.971825 | orchestrator |  "data_vg": "ceph-b063fe53-4e4e-551f-8a45-331436b07c8b" 2026-04-09 00:41:50.971835 | orchestrator |  } 2026-04-09 00:41:50.971846 | orchestrator |  ] 2026-04-09 00:41:50.971857 | orchestrator |  } 2026-04-09 00:41:50.971868 | orchestrator | } 2026-04-09 00:41:50.971879 | orchestrator | 2026-04-09 00:41:50.971890 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-09 00:41:50.971900 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.167) 0:00:11.954 ******** 2026-04-09 00:41:50.971911 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 00:41:50.971922 | orchestrator | 2026-04-09 00:41:50.972018 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-09 00:41:50.972032 | orchestrator | 2026-04-09 00:41:50.972043 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:41:50.972054 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:01.898) 0:00:13.852 ******** 2026-04-09 00:41:50.972065 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-09 00:41:50.972076 | orchestrator | 2026-04-09 00:41:50.972087 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:41:50.972103 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.223) 0:00:14.075 ******** 2026-04-09 00:41:50.972123 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:50.972141 | orchestrator | 2026-04-09 00:41:50.972173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147133 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.216) 0:00:14.292 ******** 2026-04-09 00:41:58.147235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:41:58.147249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:41:58.147259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:41:58.147268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:41:58.147277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:41:58.147286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:41:58.147294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:41:58.147308 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:41:58.147317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 00:41:58.147358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:41:58.147370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:41:58.147379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:41:58.147406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:41:58.147416 | orchestrator | 2026-04-09 00:41:58.147426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147435 | orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.346) 0:00:14.638 ******** 2026-04-09 00:41:58.147444 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147454 | orchestrator | 2026-04-09 00:41:58.147463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147472 | orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.183) 0:00:14.822 ******** 2026-04-09 00:41:58.147501 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147510 | orchestrator | 2026-04-09 00:41:58.147519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147528 | orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.208) 0:00:15.030 ******** 2026-04-09 00:41:58.147537 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147546 | orchestrator | 2026-04-09 00:41:58.147555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147564 | 
orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.171) 0:00:15.202 ******** 2026-04-09 00:41:58.147573 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147582 | orchestrator | 2026-04-09 00:41:58.147593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147603 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.179) 0:00:15.381 ******** 2026-04-09 00:41:58.147613 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147624 | orchestrator | 2026-04-09 00:41:58.147635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147645 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.473) 0:00:15.854 ******** 2026-04-09 00:41:58.147655 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147666 | orchestrator | 2026-04-09 00:41:58.147677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147687 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.176) 0:00:16.031 ******** 2026-04-09 00:41:58.147697 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147708 | orchestrator | 2026-04-09 00:41:58.147718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147728 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.173) 0:00:16.205 ******** 2026-04-09 00:41:58.147739 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.147749 | orchestrator | 2026-04-09 00:41:58.147759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147771 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.230) 0:00:16.435 ******** 2026-04-09 00:41:58.147781 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf) 2026-04-09 00:41:58.147792 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf) 2026-04-09 00:41:58.147802 | orchestrator | 2026-04-09 00:41:58.147812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147823 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.381) 0:00:16.816 ******** 2026-04-09 00:41:58.147834 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b) 2026-04-09 00:41:58.147844 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b) 2026-04-09 00:41:58.147855 | orchestrator | 2026-04-09 00:41:58.147865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147875 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.384) 0:00:17.201 ******** 2026-04-09 00:41:58.147886 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48) 2026-04-09 00:41:58.147896 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48) 2026-04-09 00:41:58.147906 | orchestrator | 2026-04-09 00:41:58.147917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:58.147946 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.378) 0:00:17.580 ******** 2026-04-09 00:41:58.147957 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec) 2026-04-09 00:41:58.147967 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec) 2026-04-09 00:41:58.147977 | orchestrator | 2026-04-09 00:41:58.147993 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-09 00:41:58.148002 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.422) 0:00:18.003 ******** 2026-04-09 00:41:58.148011 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:41:58.148020 | orchestrator | 2026-04-09 00:41:58.148030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148039 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.328) 0:00:18.331 ******** 2026-04-09 00:41:58.148048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:41:58.148057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:41:58.148072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:41:58.148081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:41:58.148090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:41:58.148099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:41:58.148108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:41:58.148117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:41:58.148126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-09 00:41:58.148135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:41:58.148143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-09 00:41:58.148152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:41:58.148161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:41:58.148170 | orchestrator | 2026-04-09 00:41:58.148179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148188 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.388) 0:00:18.719 ******** 2026-04-09 00:41:58.148197 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148206 | orchestrator | 2026-04-09 00:41:58.148214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148223 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.190) 0:00:18.910 ******** 2026-04-09 00:41:58.148232 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148241 | orchestrator | 2026-04-09 00:41:58.148250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148259 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.670) 0:00:19.581 ******** 2026-04-09 00:41:58.148268 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148277 | orchestrator | 2026-04-09 00:41:58.148286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148295 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.184) 0:00:19.766 ******** 2026-04-09 00:41:58.148304 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148313 | orchestrator | 2026-04-09 00:41:58.148322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148355 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.194) 0:00:19.961 ******** 2026-04-09 00:41:58.148364 
| orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148372 | orchestrator | 2026-04-09 00:41:58.148381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148389 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.184) 0:00:20.145 ******** 2026-04-09 00:41:58.148398 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148412 | orchestrator | 2026-04-09 00:41:58.148421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148430 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.185) 0:00:20.331 ******** 2026-04-09 00:41:58.148438 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148447 | orchestrator | 2026-04-09 00:41:58.148455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148464 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.200) 0:00:20.531 ******** 2026-04-09 00:41:58.148472 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:58.148481 | orchestrator | 2026-04-09 00:41:58.148489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148498 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.174) 0:00:20.707 ******** 2026-04-09 00:41:58.148506 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-09 00:41:58.148515 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-09 00:41:58.148524 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-09 00:41:58.148533 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-09 00:41:58.148541 | orchestrator | 2026-04-09 00:41:58.148550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:41:58.148558 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.657) 
0:00:21.364 ******** 2026-04-09 00:41:58.148567 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.625521 | orchestrator | 2026-04-09 00:42:03.625627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:03.625641 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.182) 0:00:21.546 ******** 2026-04-09 00:42:03.625652 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.625662 | orchestrator | 2026-04-09 00:42:03.625672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:03.625682 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.247) 0:00:21.793 ******** 2026-04-09 00:42:03.625692 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.625702 | orchestrator | 2026-04-09 00:42:03.625712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:03.625722 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.151) 0:00:21.945 ******** 2026-04-09 00:42:03.625731 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.625741 | orchestrator | 2026-04-09 00:42:03.625751 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 00:42:03.625761 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.164) 0:00:22.110 ******** 2026-04-09 00:42:03.625771 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-09 00:42:03.625781 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-09 00:42:03.625791 | orchestrator | 2026-04-09 00:42:03.625801 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-09 00:42:03.625828 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.258) 0:00:22.369 ******** 2026-04-09 00:42:03.625839 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:42:03.625848 | orchestrator | 2026-04-09 00:42:03.625858 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 00:42:03.625868 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.104) 0:00:22.474 ******** 2026-04-09 00:42:03.625877 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.625903 | orchestrator | 2026-04-09 00:42:03.625924 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 00:42:03.625940 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.105) 0:00:22.579 ******** 2026-04-09 00:42:03.625951 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.625963 | orchestrator | 2026-04-09 00:42:03.625975 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 00:42:03.625986 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.110) 0:00:22.690 ******** 2026-04-09 00:42:03.626070 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:03.626084 | orchestrator | 2026-04-09 00:42:03.626093 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 00:42:03.626103 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.092) 0:00:22.783 ******** 2026-04-09 00:42:03.626113 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa87c95d-d840-5309-8296-5c77234dd7e9'}}) 2026-04-09 00:42:03.626124 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}}) 2026-04-09 00:42:03.626133 | orchestrator | 2026-04-09 00:42:03.626143 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:42:03.626153 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.142) 0:00:22.925 ******** 2026-04-09 00:42:03.626163 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa87c95d-d840-5309-8296-5c77234dd7e9'}})  2026-04-09 00:42:03.626174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}})  2026-04-09 00:42:03.626184 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.626193 | orchestrator | 2026-04-09 00:42:03.626203 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:42:03.626213 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.125) 0:00:23.051 ******** 2026-04-09 00:42:03.626222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa87c95d-d840-5309-8296-5c77234dd7e9'}})  2026-04-09 00:42:03.626232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}})  2026-04-09 00:42:03.626243 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.626252 | orchestrator | 2026-04-09 00:42:03.626262 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:42:03.626272 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.141) 0:00:23.192 ******** 2026-04-09 00:42:03.626281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa87c95d-d840-5309-8296-5c77234dd7e9'}})  2026-04-09 00:42:03.626291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}})  2026-04-09 00:42:03.626301 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.626310 | orchestrator | 2026-04-09 00:42:03.626320 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:42:03.626371 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 
(0:00:00.119) 0:00:23.312 ******** 2026-04-09 00:42:03.626383 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:03.626393 | orchestrator | 2026-04-09 00:42:03.626403 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 00:42:03.626413 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.104) 0:00:23.417 ******** 2026-04-09 00:42:03.626422 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:42:03.626432 | orchestrator | 2026-04-09 00:42:03.626441 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 00:42:03.626451 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.127) 0:00:23.545 ******** 2026-04-09 00:42:03.626479 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.626489 | orchestrator | 2026-04-09 00:42:03.626499 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 00:42:03.626509 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.119) 0:00:23.664 ******** 2026-04-09 00:42:03.626518 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.626528 | orchestrator | 2026-04-09 00:42:03.626537 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 00:42:03.626547 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.333) 0:00:23.998 ******** 2026-04-09 00:42:03.626557 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:42:03.626575 | orchestrator | 2026-04-09 00:42:03.626585 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 00:42:03.626595 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.148) 0:00:24.146 ******** 2026-04-09 00:42:03.626604 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:42:03.626614 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:42:03.626624 | orchestrator |  "sdb": { 
2026-04-09 00:42:03.626634 | orchestrator |         "osd_lvm_uuid": "fa87c95d-d840-5309-8296-5c77234dd7e9"
2026-04-09 00:42:03.626645 | orchestrator |     },
2026-04-09 00:42:03.626655 | orchestrator |     "sdc": {
2026-04-09 00:42:03.626664 | orchestrator |         "osd_lvm_uuid": "e4752f0c-8dc2-56ff-98d4-03c08b41fecd"
2026-04-09 00:42:03.626674 | orchestrator |     }
2026-04-09 00:42:03.626683 | orchestrator |   }
2026-04-09 00:42:03.626693 | orchestrator | }
2026-04-09 00:42:03.626703 | orchestrator |
2026-04-09 00:42:03.626712 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-09 00:42:03.626722 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.163) 0:00:24.310 ********
2026-04-09 00:42:03.626731 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:03.626741 | orchestrator |
2026-04-09 00:42:03.626751 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-09 00:42:03.626760 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.132) 0:00:24.442 ********
2026-04-09 00:42:03.626770 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:03.626780 | orchestrator |
2026-04-09 00:42:03.626789 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-09 00:42:03.626799 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.109) 0:00:24.552 ********
2026-04-09 00:42:03.626808 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:03.626817 | orchestrator |
2026-04-09 00:42:03.626827 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-09 00:42:03.626843 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.109) 0:00:24.661 ********
2026-04-09 00:42:03.626853 | orchestrator | changed: [testbed-node-4] => {
2026-04-09 00:42:03.626863 | orchestrator |   "_ceph_configure_lvm_config_data": {
2026-04-09 00:42:03.626872 | orchestrator |     "ceph_osd_devices": {
2026-04-09 00:42:03.626882 | orchestrator |       "sdb": {
2026-04-09 00:42:03.626892 | orchestrator |         "osd_lvm_uuid": "fa87c95d-d840-5309-8296-5c77234dd7e9"
2026-04-09 00:42:03.626901 | orchestrator |       },
2026-04-09 00:42:03.626911 | orchestrator |       "sdc": {
2026-04-09 00:42:03.626921 | orchestrator |         "osd_lvm_uuid": "e4752f0c-8dc2-56ff-98d4-03c08b41fecd"
2026-04-09 00:42:03.626931 | orchestrator |       }
2026-04-09 00:42:03.626940 | orchestrator |     },
2026-04-09 00:42:03.626950 | orchestrator |     "lvm_volumes": [
2026-04-09 00:42:03.626959 | orchestrator |       {
2026-04-09 00:42:03.626969 | orchestrator |         "data": "osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9",
2026-04-09 00:42:03.626979 | orchestrator |         "data_vg": "ceph-fa87c95d-d840-5309-8296-5c77234dd7e9"
2026-04-09 00:42:03.626989 | orchestrator |       },
2026-04-09 00:42:03.626998 | orchestrator |       {
2026-04-09 00:42:03.627008 | orchestrator |         "data": "osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd",
2026-04-09 00:42:03.627018 | orchestrator |         "data_vg": "ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd"
2026-04-09 00:42:03.627027 | orchestrator |       }
2026-04-09 00:42:03.627037 | orchestrator |     ]
2026-04-09 00:42:03.627047 | orchestrator |   }
2026-04-09 00:42:03.627057 | orchestrator | }
2026-04-09 00:42:03.627066 | orchestrator |
2026-04-09 00:42:03.627079 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-09 00:42:03.627096 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.174) 0:00:24.835 ********
2026-04-09 00:42:03.627118 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-09 00:42:03.627142 | orchestrator |
2026-04-09 00:42:03.627170 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-09 00:42:03.627187 | orchestrator |
2026-04-09 00:42:03.627203 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:42:03.627220 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.899) 0:00:25.735 ********
2026-04-09 00:42:03.627237 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-09 00:42:03.627255 | orchestrator |
2026-04-09 00:42:03.627273 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:42:03.627289 | orchestrator | Thursday 09 April 2026 00:42:02 +0000 (0:00:00.420) 0:00:26.155 ********
2026-04-09 00:42:03.627303 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:03.627313 | orchestrator |
2026-04-09 00:42:03.627323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:03.627357 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.361) 0:00:26.695 ********
2026-04-09 00:42:03.627370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-09 00:42:03.627380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-09 00:42:03.627389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-09 00:42:03.627399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-09 00:42:03.627408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-09 00:42:03.627428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-09 00:42:11.525187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-09 00:42:11.525289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-09 00:42:11.525298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-09 00:42:11.525305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-09 00:42:11.525372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-09 00:42:11.525380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-09 00:42:11.525387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-09 00:42:11.525393 | orchestrator |
2026-04-09 00:42:11.525400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525406 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.361) 0:00:27.056 ********
2026-04-09 00:42:11.525412 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525418 | orchestrator |
2026-04-09 00:42:11.525424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525429 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.185) 0:00:27.242 ********
2026-04-09 00:42:11.525435 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525440 | orchestrator |
2026-04-09 00:42:11.525445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525451 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.157) 0:00:27.399 ********
2026-04-09 00:42:11.525456 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525462 | orchestrator |
2026-04-09 00:42:11.525467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525473 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.212) 0:00:27.612 ********
2026-04-09 00:42:11.525478 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525484 | orchestrator |
2026-04-09 00:42:11.525489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525495 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.177) 0:00:27.789 ********
2026-04-09 00:42:11.525516 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525521 | orchestrator |
2026-04-09 00:42:11.525527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525532 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.180) 0:00:27.970 ********
2026-04-09 00:42:11.525538 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525543 | orchestrator |
2026-04-09 00:42:11.525549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525554 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.181) 0:00:28.151 ********
2026-04-09 00:42:11.525559 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525565 | orchestrator |
2026-04-09 00:42:11.525570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525576 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.210) 0:00:28.362 ********
2026-04-09 00:42:11.525581 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525586 | orchestrator |
2026-04-09 00:42:11.525592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525597 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.192) 0:00:28.554 ********
2026-04-09 00:42:11.525603 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae)
2026-04-09 00:42:11.525609 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae)
2026-04-09 00:42:11.525614 | orchestrator |
2026-04-09 00:42:11.525620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525625 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.916) 0:00:29.471 ********
2026-04-09 00:42:11.525643 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3)
2026-04-09 00:42:11.525649 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3)
2026-04-09 00:42:11.525654 | orchestrator |
2026-04-09 00:42:11.525660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525665 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.755) 0:00:30.227 ********
2026-04-09 00:42:11.525671 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645)
2026-04-09 00:42:11.525676 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645)
2026-04-09 00:42:11.525681 | orchestrator |
2026-04-09 00:42:11.525687 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525692 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.422) 0:00:30.649 ********
2026-04-09 00:42:11.525697 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47)
2026-04-09 00:42:11.525703 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47)
2026-04-09 00:42:11.525708 | orchestrator |
2026-04-09 00:42:11.525714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:11.525719 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.413) 0:00:31.063 ********
2026-04-09 00:42:11.525724 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:42:11.525730 | orchestrator |
2026-04-09 00:42:11.525735 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525753 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.343) 0:00:31.406 ********
2026-04-09 00:42:11.525760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-09 00:42:11.525766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-09 00:42:11.525773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-09 00:42:11.525779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-09 00:42:11.525791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-09 00:42:11.525798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-09 00:42:11.525804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-09 00:42:11.525811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-09 00:42:11.525816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-09 00:42:11.525822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-09 00:42:11.525827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-09 00:42:11.525833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-09 00:42:11.525838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-09 00:42:11.525843 | orchestrator |
2026-04-09 00:42:11.525849 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525854 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.364) 0:00:31.771 ********
2026-04-09 00:42:11.525860 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525865 | orchestrator |
2026-04-09 00:42:11.525871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525876 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.197) 0:00:31.968 ********
2026-04-09 00:42:11.525881 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525887 | orchestrator |
2026-04-09 00:42:11.525892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525898 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.189) 0:00:32.158 ********
2026-04-09 00:42:11.525903 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525908 | orchestrator |
2026-04-09 00:42:11.525914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525919 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.182) 0:00:32.340 ********
2026-04-09 00:42:11.525924 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525930 | orchestrator |
2026-04-09 00:42:11.525935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525941 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.172) 0:00:32.513 ********
2026-04-09 00:42:11.525946 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525951 | orchestrator |
2026-04-09 00:42:11.525956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525962 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.174) 0:00:32.687 ********
2026-04-09 00:42:11.525967 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525973 | orchestrator |
2026-04-09 00:42:11.525978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.525983 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.525) 0:00:33.212 ********
2026-04-09 00:42:11.525989 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.525994 | orchestrator |
2026-04-09 00:42:11.526000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.526005 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.191) 0:00:33.404 ********
2026-04-09 00:42:11.526010 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.526048 | orchestrator |
2026-04-09 00:42:11.526055 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.526061 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.167) 0:00:33.572 ********
2026-04-09 00:42:11.526066 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-09 00:42:11.526076 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-09 00:42:11.526082 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-09 00:42:11.526087 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-09 00:42:11.526093 | orchestrator |
2026-04-09 00:42:11.526098 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.526103 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.630) 0:00:34.203 ********
2026-04-09 00:42:11.526109 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.526114 | orchestrator |
2026-04-09 00:42:11.526120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.526125 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.157) 0:00:34.360 ********
2026-04-09 00:42:11.526131 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.526136 | orchestrator |
2026-04-09 00:42:11.526141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.526147 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.166) 0:00:34.527 ********
2026-04-09 00:42:11.526152 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.526158 | orchestrator |
2026-04-09 00:42:11.526163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:11.526169 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.186) 0:00:34.713 ********
2026-04-09 00:42:11.526174 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:11.526179 | orchestrator |
2026-04-09 00:42:11.526196 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-09 00:42:14.997723 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.136) 0:00:34.849 ********
2026-04-09 00:42:14.997825 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-09 00:42:14.997842 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-09 00:42:14.997853 | orchestrator |
2026-04-09 00:42:14.997865 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-09 00:42:14.997876 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.156) 0:00:35.006 ********
2026-04-09 00:42:14.997887 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.997898 | orchestrator |
2026-04-09 00:42:14.997909 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-09 00:42:14.997920 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.105) 0:00:35.111 ********
2026-04-09 00:42:14.997949 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.997961 | orchestrator |
2026-04-09 00:42:14.997973 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-09 00:42:14.997983 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.105) 0:00:35.217 ********
2026-04-09 00:42:14.997993 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998004 | orchestrator |
2026-04-09 00:42:14.998076 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-09 00:42:14.998090 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.181) 0:00:35.399 ********
2026-04-09 00:42:14.998102 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:14.998143 | orchestrator |
2026-04-09 00:42:14.998157 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-09 00:42:14.998168 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.225) 0:00:35.625 ********
2026-04-09 00:42:14.998180 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77990f9-27fa-58e8-a0b8-915245e923bd'}})
2026-04-09 00:42:14.998202 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6c03351d-b2bb-55a5-9b19-7d0118202256'}})
2026-04-09 00:42:14.998214 | orchestrator |
2026-04-09 00:42:14.998226 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-09 00:42:14.998236 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.114) 0:00:35.739 ********
2026-04-09 00:42:14.998248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77990f9-27fa-58e8-a0b8-915245e923bd'}})
2026-04-09 00:42:14.998283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6c03351d-b2bb-55a5-9b19-7d0118202256'}})
2026-04-09 00:42:14.998294 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998305 | orchestrator |
2026-04-09 00:42:14.998316 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-09 00:42:14.998326 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.102) 0:00:35.842 ********
2026-04-09 00:42:14.998359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77990f9-27fa-58e8-a0b8-915245e923bd'}})
2026-04-09 00:42:14.998370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6c03351d-b2bb-55a5-9b19-7d0118202256'}})
2026-04-09 00:42:14.998380 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998391 | orchestrator |
2026-04-09 00:42:14.998401 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-09 00:42:14.998411 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.104) 0:00:35.946 ********
2026-04-09 00:42:14.998421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77990f9-27fa-58e8-a0b8-915245e923bd'}})
2026-04-09 00:42:14.998432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6c03351d-b2bb-55a5-9b19-7d0118202256'}})
2026-04-09 00:42:14.998442 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998453 | orchestrator |
2026-04-09 00:42:14.998464 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-09 00:42:14.998475 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.102) 0:00:36.048 ********
2026-04-09 00:42:14.998485 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:14.998496 | orchestrator |
2026-04-09 00:42:14.998506 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-09 00:42:14.998518 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.101) 0:00:36.150 ********
2026-04-09 00:42:14.998528 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:14.998538 | orchestrator |
2026-04-09 00:42:14.998549 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-09 00:42:14.998564 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.099) 0:00:36.249 ********
2026-04-09 00:42:14.998574 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998584 | orchestrator |
2026-04-09 00:42:14.998594 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-09 00:42:14.998605 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.091) 0:00:36.340 ********
2026-04-09 00:42:14.998615 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998625 | orchestrator |
2026-04-09 00:42:14.998635 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-09 00:42:14.998645 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.114) 0:00:36.455 ********
2026-04-09 00:42:14.998655 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998665 | orchestrator |
2026-04-09 00:42:14.998675 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-09 00:42:14.998686 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.116) 0:00:36.572 ********
2026-04-09 00:42:14.998697 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:42:14.998708 | orchestrator |   "ceph_osd_devices": {
2026-04-09 00:42:14.998718 | orchestrator |     "sdb": {
2026-04-09 00:42:14.998750 | orchestrator |       "osd_lvm_uuid": "e77990f9-27fa-58e8-a0b8-915245e923bd"
2026-04-09 00:42:14.998762 | orchestrator |     },
2026-04-09 00:42:14.998773 | orchestrator |     "sdc": {
2026-04-09 00:42:14.998782 | orchestrator |       "osd_lvm_uuid": "6c03351d-b2bb-55a5-9b19-7d0118202256"
2026-04-09 00:42:14.998793 | orchestrator |     }
2026-04-09 00:42:14.998803 | orchestrator |   }
2026-04-09 00:42:14.998814 | orchestrator | }
2026-04-09 00:42:14.998826 | orchestrator |
2026-04-09 00:42:14.998848 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-09 00:42:14.998860 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.111) 0:00:36.683 ********
2026-04-09 00:42:14.998870 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998880 | orchestrator |
2026-04-09 00:42:14.998890 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-09 00:42:14.998900 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.142) 0:00:36.825 ********
2026-04-09 00:42:14.998910 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998921 | orchestrator |
2026-04-09 00:42:14.998931 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-09 00:42:14.998941 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.308) 0:00:37.134 ********
2026-04-09 00:42:14.998951 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:14.998961 | orchestrator |
2026-04-09 00:42:14.998972 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-09 00:42:14.998983 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.122) 0:00:37.257 ********
2026-04-09 00:42:14.998993 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 00:42:14.999004 | orchestrator |   "_ceph_configure_lvm_config_data": {
2026-04-09 00:42:14.999015 | orchestrator |     "ceph_osd_devices": {
2026-04-09 00:42:14.999026 | orchestrator |       "sdb": {
2026-04-09 00:42:14.999037 | orchestrator |         "osd_lvm_uuid": "e77990f9-27fa-58e8-a0b8-915245e923bd"
2026-04-09 00:42:14.999048 | orchestrator |       },
2026-04-09 00:42:14.999058 | orchestrator |       "sdc": {
2026-04-09 00:42:14.999069 | orchestrator |         "osd_lvm_uuid": "6c03351d-b2bb-55a5-9b19-7d0118202256"
2026-04-09 00:42:14.999080 | orchestrator |       }
2026-04-09 00:42:14.999091 | orchestrator |     },
2026-04-09 00:42:14.999102 | orchestrator |     "lvm_volumes": [
2026-04-09 00:42:14.999113 | orchestrator |       {
2026-04-09 00:42:14.999124 | orchestrator |         "data": "osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd",
2026-04-09 00:42:14.999135 | orchestrator |         "data_vg": "ceph-e77990f9-27fa-58e8-a0b8-915245e923bd"
2026-04-09 00:42:14.999146 | orchestrator |       },
2026-04-09 00:42:14.999161 | orchestrator |       {
2026-04-09 00:42:14.999172 | orchestrator |         "data": "osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256",
2026-04-09 00:42:14.999182 | orchestrator |         "data_vg": "ceph-6c03351d-b2bb-55a5-9b19-7d0118202256"
2026-04-09 00:42:14.999193 | orchestrator |       }
2026-04-09 00:42:14.999204 | orchestrator |     ]
2026-04-09 00:42:14.999214 | orchestrator |   }
2026-04-09 00:42:14.999225 | orchestrator | }
2026-04-09 00:42:14.999237 | orchestrator |
2026-04-09 00:42:14.999246 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-09 00:42:14.999256 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.219) 0:00:37.476 ********
2026-04-09 00:42:14.999267 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-09 00:42:14.999277 | orchestrator |
2026-04-09 00:42:14.999287 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:42:14.999298 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 00:42:14.999310 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 00:42:14.999320 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 00:42:14.999330 | orchestrator |
2026-04-09 00:42:14.999396 | orchestrator |
2026-04-09 00:42:14.999408 | orchestrator |
2026-04-09 00:42:14.999419 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:42:14.999430 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.831) 0:00:38.308 ********
2026-04-09 00:42:14.999452 | orchestrator | ===============================================================================
2026-04-09 00:42:14.999464 | orchestrator | Write configuration file ------------------------------------------------ 3.63s
2026-04-09 00:42:14.999475 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2026-04-09 00:42:14.999494 | orchestrator | Add known links to the list of available block devices ------------------ 1.06s
2026-04-09 00:42:14.999505 | orchestrator | Get initial list of available block devices ----------------------------- 0.98s
2026-04-09 00:42:14.999515 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-04-09 00:42:14.999525 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s
2026-04-09 00:42:14.999536 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s
2026-04-09 00:42:14.999546 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-04-09 00:42:14.999557 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-04-09 00:42:14.999567 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-04-09 00:42:14.999578 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-04-09 00:42:14.999587 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-04-09 00:42:14.999598 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-04-09 00:42:14.999620 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-04-09 00:42:15.311443 | orchestrator | Set WAL devices config data --------------------------------------------- 0.57s
2026-04-09 00:42:15.311552 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.56s
2026-04-09 00:42:15.311568 | orchestrator | Print configuration data ------------------------------------------------ 0.56s
2026-04-09 00:42:15.311579 | orchestrator | Print DB devices -------------------------------------------------------- 0.55s
2026-04-09 00:42:15.311590 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s
2026-04-09 00:42:15.311601 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.50s
2026-04-09 00:42:36.907552 | orchestrator | 2026-04-09 00:42:36 | INFO  | Task 5e197603-9563-4c82-9a6c-6264ab59a95c (sync inventory) is running in background. Output coming soon.
2026-04-09 00:43:04.120097 | orchestrator | 2026-04-09 00:42:38 | INFO  | Starting group_vars file reorganization
2026-04-09 00:43:04.120223 | orchestrator | 2026-04-09 00:42:38 | INFO  | Moved 0 file(s) to their respective directories
2026-04-09 00:43:04.120241 | orchestrator | 2026-04-09 00:42:38 | INFO  | Group_vars file reorganization completed
2026-04-09 00:43:04.120253 | orchestrator | 2026-04-09 00:42:41 | INFO  | Starting variable preparation from inventory
2026-04-09 00:43:04.120265 | orchestrator | 2026-04-09 00:42:43 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-09 00:43:04.120277 | orchestrator | 2026-04-09 00:42:43 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-09 00:43:04.120307 | orchestrator | 2026-04-09 00:42:43 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-09 00:43:04.120319 | orchestrator | 2026-04-09 00:42:43 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-09 00:43:04.120330 | orchestrator | 2026-04-09 00:42:43 | INFO  | Variable preparation completed
2026-04-09 00:43:04.120341 | orchestrator | 2026-04-09 00:42:45 | INFO  | Starting inventory overwrite handling
2026-04-09 00:43:04.120352 | orchestrator | 2026-04-09 00:42:45 | INFO  | Handling group overwrites in 99-overwrite
2026-04-09 00:43:04.120419 | orchestrator | 2026-04-09 00:42:45 | INFO  | Removing group frr:children from 60-generic
2026-04-09 00:43:04.120456 | orchestrator | 2026-04-09 00:42:45 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-09 00:43:04.120468 | orchestrator | 2026-04-09 00:42:45 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-09 00:43:04.120479 | orchestrator | 2026-04-09 00:42:45 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-09 00:43:04.120490 | orchestrator | 2026-04-09 00:42:45 | INFO  | Handling group overwrites in 20-roles
2026-04-09 00:43:04.120501 | orchestrator | 2026-04-09 00:42:45 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-09 00:43:04.120512 | orchestrator | 2026-04-09 00:42:45 | INFO  | Removed 5 group(s) in total
2026-04-09 00:43:04.120523 | orchestrator | 2026-04-09 00:42:45 | INFO  | Inventory overwrite handling completed
2026-04-09 00:43:04.120534 | orchestrator | 2026-04-09 00:42:46 | INFO  | Starting merge of inventory files
2026-04-09 00:43:04.120544 | orchestrator | 2026-04-09 00:42:46 | INFO  | Inventory files merged successfully
2026-04-09 00:43:04.120555 | orchestrator | 2026-04-09 00:42:50 | INFO  | Generating minified hosts file
2026-04-09 00:43:04.120566 | orchestrator | 2026-04-09 00:42:51 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-09 00:43:04.120578 | orchestrator | 2026-04-09 00:42:51 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-09 00:43:04.120589 | orchestrator | 2026-04-09 00:42:52 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-09 00:43:04.120599 | orchestrator | 2026-04-09 00:43:02 | INFO  | Successfully wrote ClusterShell configuration
2026-04-09 00:43:04.120611 | orchestrator | [master 46a9e93] 2026-04-09-00-43
2026-04-09 00:43:04.120627 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-09 00:43:04.120642 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-09 00:43:04.120655 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-09 00:43:04.120668 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-09 00:43:05.363671 | orchestrator | 2026-04-09 00:43:05 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-09 00:43:05.416567 | orchestrator | 2026-04-09 00:43:05 | INFO  | Task c645c4da-6b67-4d07-82e5-8512d0542bd9 (ceph-create-lvm-devices) was prepared for execution.
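The "inventory overwrite handling" entries above show groups being stripped from lower-priority inventory files (60-generic, 50-infrastructure, 50-ceph) because a higher-priority file (99-overwrite, 20-roles) redefines them. A minimal sketch of that idea, under the assumption of a simple file-to-groups mapping (the function name and data model are illustrative, not the tool's actual implementation):

```python
def handle_overwrites(inventories: dict[str, set[str]], overwrite_file: str) -> int:
    """Remove every group defined in `overwrite_file` from all other
    inventory files, so only the overriding definition survives a later
    merge. Returns the number of group entries removed."""
    removed = 0
    overriding_groups = inventories.get(overwrite_file, set())
    for name, groups in inventories.items():
        if name == overwrite_file:
            continue
        # Intersect first so we only log/remove groups actually present.
        for group in sorted(overriding_groups & groups):
            print(f"Removing group {group} from {name}")
            groups.discard(group)
            removed += 1
    return removed
```

Applied once per overwrite file (here 99-overwrite, then 20-roles), the per-file removal counts would sum to the "Removed 5 group(s) in total" reported in the log.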
2026-04-09 00:43:05.416679 | orchestrator | 2026-04-09 00:43:05 | INFO  | It takes a moment until task c645c4da-6b67-4d07-82e5-8512d0542bd9 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-09 00:43:15.972702 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 00:43:15.972806 | orchestrator | 2.16.14
2026-04-09 00:43:15.972838 | orchestrator |
2026-04-09 00:43:15.972865 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-09 00:43:15.972888 | orchestrator |
2026-04-09 00:43:15.972907 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:43:15.972920 | orchestrator | Thursday 09 April 2026 00:43:09 +0000 (0:00:00.243) 0:00:00.243 ********
2026-04-09 00:43:15.972934 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 00:43:15.972948 | orchestrator |
2026-04-09 00:43:15.972962 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:43:15.972975 | orchestrator | Thursday 09 April 2026 00:43:09 +0000 (0:00:00.216) 0:00:00.459 ********
2026-04-09 00:43:15.972988 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:15.973001 | orchestrator |
2026-04-09 00:43:15.973013 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973026 | orchestrator | Thursday 09 April 2026 00:43:09 +0000 (0:00:00.206) 0:00:00.666 ********
2026-04-09 00:43:15.973065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:43:15.973079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:43:15.973094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:43:15.973107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:43:15.973119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:43:15.973132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:43:15.973145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:43:15.973158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:43:15.973172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-09 00:43:15.973186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:43:15.973201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:43:15.973214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:43:15.973228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:43:15.973243 | orchestrator |
2026-04-09 00:43:15.973257 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973281 | orchestrator | Thursday 09 April 2026 00:43:10 +0000 (0:00:00.356) 0:00:01.022 ********
2026-04-09 00:43:15.973295 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973309 | orchestrator |
2026-04-09 00:43:15.973323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973335 | orchestrator | Thursday 09 April 2026 00:43:10 +0000 (0:00:00.379) 0:00:01.402 ********
2026-04-09 00:43:15.973348 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973361 | orchestrator |
2026-04-09 00:43:15.973401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973415 | orchestrator | Thursday 09 April 2026 00:43:10 +0000 (0:00:00.171) 0:00:01.573 ********
2026-04-09 00:43:15.973449 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973465 | orchestrator |
2026-04-09 00:43:15.973477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973486 | orchestrator | Thursday 09 April 2026 00:43:10 +0000 (0:00:00.187) 0:00:01.761 ********
2026-04-09 00:43:15.973495 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973503 | orchestrator |
2026-04-09 00:43:15.973512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973521 | orchestrator | Thursday 09 April 2026 00:43:11 +0000 (0:00:00.198) 0:00:01.959 ********
2026-04-09 00:43:15.973530 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973538 | orchestrator |
2026-04-09 00:43:15.973547 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973556 | orchestrator | Thursday 09 April 2026 00:43:11 +0000 (0:00:00.173) 0:00:02.133 ********
2026-04-09 00:43:15.973564 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973572 | orchestrator |
2026-04-09 00:43:15.973581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973590 | orchestrator | Thursday 09 April 2026 00:43:11 +0000 (0:00:00.173) 0:00:02.306 ********
2026-04-09 00:43:15.973599 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973607 | orchestrator |
2026-04-09 00:43:15.973616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973625 | orchestrator | Thursday 09 April 2026 00:43:11 +0000 (0:00:00.174) 0:00:02.481 ********
2026-04-09 00:43:15.973633 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.973654 | orchestrator |
2026-04-09 00:43:15.973663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973671 | orchestrator | Thursday 09 April 2026 00:43:11 +0000 (0:00:00.167) 0:00:02.648 ********
2026-04-09 00:43:15.973680 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc)
2026-04-09 00:43:15.973690 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc)
2026-04-09 00:43:15.973698 | orchestrator |
2026-04-09 00:43:15.973707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973735 | orchestrator | Thursday 09 April 2026 00:43:12 +0000 (0:00:00.416) 0:00:03.064 ********
2026-04-09 00:43:15.973744 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7)
2026-04-09 00:43:15.973753 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7)
2026-04-09 00:43:15.973762 | orchestrator |
2026-04-09 00:43:15.973770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973779 | orchestrator | Thursday 09 April 2026 00:43:12 +0000 (0:00:00.397) 0:00:03.462 ********
2026-04-09 00:43:15.973787 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1)
2026-04-09 00:43:15.973796 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1)
2026-04-09 00:43:15.973804 | orchestrator |
2026-04-09 00:43:15.973813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973821 | orchestrator | Thursday 09 April 2026 00:43:13 +0000 (0:00:00.531) 0:00:03.994 ********
2026-04-09 00:43:15.973830 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a)
2026-04-09 00:43:15.973838 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a)
2026-04-09 00:43:15.973847 | orchestrator |
2026-04-09 00:43:15.973855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:43:15.973864 | orchestrator | Thursday 09 April 2026 00:43:13 +0000 (0:00:00.579) 0:00:04.573 ********
2026-04-09 00:43:15.973872 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:43:15.973881 | orchestrator |
2026-04-09 00:43:15.973889 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.973903 | orchestrator | Thursday 09 April 2026 00:43:14 +0000 (0:00:00.651) 0:00:05.225 ********
2026-04-09 00:43:15.973912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:43:15.973920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:43:15.973929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:43:15.973937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:43:15.973946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:43:15.973954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:43:15.973963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:43:15.973971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:43:15.973980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-09 00:43:15.973988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:43:15.973997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:43:15.974005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:43:15.974091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:43:15.974114 | orchestrator |
2026-04-09 00:43:15.974131 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974146 | orchestrator | Thursday 09 April 2026 00:43:14 +0000 (0:00:00.382) 0:00:05.608 ********
2026-04-09 00:43:15.974161 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974175 | orchestrator |
2026-04-09 00:43:15.974184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974192 | orchestrator | Thursday 09 April 2026 00:43:14 +0000 (0:00:00.182) 0:00:05.791 ********
2026-04-09 00:43:15.974201 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974209 | orchestrator |
2026-04-09 00:43:15.974218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974227 | orchestrator | Thursday 09 April 2026 00:43:15 +0000 (0:00:00.172) 0:00:05.963 ********
2026-04-09 00:43:15.974235 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974244 | orchestrator |
2026-04-09 00:43:15.974252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974261 | orchestrator | Thursday 09 April 2026 00:43:15 +0000 (0:00:00.169) 0:00:06.133 ********
2026-04-09 00:43:15.974269 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974278 | orchestrator |
2026-04-09 00:43:15.974286 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974295 | orchestrator | Thursday 09 April 2026 00:43:15 +0000 (0:00:00.186) 0:00:06.319 ********
2026-04-09 00:43:15.974303 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974312 | orchestrator |
2026-04-09 00:43:15.974321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974329 | orchestrator | Thursday 09 April 2026 00:43:15 +0000 (0:00:00.183) 0:00:06.502 ********
2026-04-09 00:43:15.974338 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974346 | orchestrator |
2026-04-09 00:43:15.974355 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:15.974363 | orchestrator | Thursday 09 April 2026 00:43:15 +0000 (0:00:00.182) 0:00:06.685 ********
2026-04-09 00:43:15.974405 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:15.974414 | orchestrator |
2026-04-09 00:43:15.974432 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:23.250453 | orchestrator | Thursday 09 April 2026 00:43:15 +0000 (0:00:00.193) 0:00:06.878 ********
2026-04-09 00:43:23.250538 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250547 | orchestrator |
2026-04-09 00:43:23.250555 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:23.250561 | orchestrator | Thursday 09 April 2026 00:43:16 +0000 (0:00:00.181) 0:00:07.060 ********
2026-04-09 00:43:23.250567 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-09 00:43:23.250574 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-09 00:43:23.250580 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-09 00:43:23.250586 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-09 00:43:23.250592 | orchestrator |
2026-04-09 00:43:23.250598 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:23.250604 | orchestrator | Thursday 09 April 2026 00:43:17 +0000 (0:00:00.861) 0:00:07.922 ********
2026-04-09 00:43:23.250610 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250616 | orchestrator |
2026-04-09 00:43:23.250621 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:23.250627 | orchestrator | Thursday 09 April 2026 00:43:17 +0000 (0:00:00.173) 0:00:08.095 ********
2026-04-09 00:43:23.250633 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250638 | orchestrator |
2026-04-09 00:43:23.250644 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:23.250666 | orchestrator | Thursday 09 April 2026 00:43:17 +0000 (0:00:00.173) 0:00:08.268 ********
2026-04-09 00:43:23.250673 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250678 | orchestrator |
2026-04-09 00:43:23.250684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:43:23.250690 | orchestrator | Thursday 09 April 2026 00:43:17 +0000 (0:00:00.180) 0:00:08.449 ********
2026-04-09 00:43:23.250695 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250701 | orchestrator |
2026-04-09 00:43:23.250707 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 00:43:23.250713 | orchestrator | Thursday 09 April 2026 00:43:17 +0000 (0:00:00.185) 0:00:08.634 ********
2026-04-09 00:43:23.250718 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250724 | orchestrator |
2026-04-09 00:43:23.250730 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 00:43:23.250735 | orchestrator | Thursday 09 April 2026 00:43:17 +0000 (0:00:00.154) 0:00:08.789 ********
2026-04-09 00:43:23.250742 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0ecce907-b02d-5708-a2ce-6926a186870f'}})
2026-04-09 00:43:23.250748 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b063fe53-4e4e-551f-8a45-331436b07c8b'}})
2026-04-09 00:43:23.250754 | orchestrator |
2026-04-09 00:43:23.250759 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 00:43:23.250765 | orchestrator | Thursday 09 April 2026 00:43:18 +0000 (0:00:00.167) 0:00:08.956 ********
2026-04-09 00:43:23.250771 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.250777 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.250783 | orchestrator |
2026-04-09 00:43:23.250789 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 00:43:23.250795 | orchestrator | Thursday 09 April 2026 00:43:19 +0000 (0:00:01.914) 0:00:10.871 ********
2026-04-09 00:43:23.250801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.250820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.250826 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250832 | orchestrator |
2026-04-09 00:43:23.250837 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 00:43:23.250843 | orchestrator | Thursday 09 April 2026 00:43:20 +0000 (0:00:00.134) 0:00:11.005 ********
2026-04-09 00:43:23.250849 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.250855 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.250861 | orchestrator |
2026-04-09 00:43:23.250866 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 00:43:23.250872 | orchestrator | Thursday 09 April 2026 00:43:21 +0000 (0:00:01.397) 0:00:12.402 ********
2026-04-09 00:43:23.250878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.250883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.250889 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250895 | orchestrator |
2026-04-09 00:43:23.250901 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 00:43:23.250913 | orchestrator | Thursday 09 April 2026 00:43:21 +0000 (0:00:00.116) 0:00:12.542 ********
2026-04-09 00:43:23.250930 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250936 | orchestrator |
2026-04-09 00:43:23.250942 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 00:43:23.250948 | orchestrator | Thursday 09 April 2026 00:43:21 +0000 (0:00:00.116) 0:00:12.659 ********
2026-04-09 00:43:23.250954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.250959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.250965 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250971 | orchestrator |
2026-04-09 00:43:23.250978 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 00:43:23.250985 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.286) 0:00:12.945 ********
2026-04-09 00:43:23.250991 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.250998 | orchestrator |
2026-04-09 00:43:23.251004 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 00:43:23.251011 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.129) 0:00:13.075 ********
2026-04-09 00:43:23.251017 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.251024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.251030 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251036 | orchestrator |
2026-04-09 00:43:23.251047 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 00:43:23.251053 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.133) 0:00:13.209 ********
2026-04-09 00:43:23.251060 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251067 | orchestrator |
2026-04-09 00:43:23.251074 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 00:43:23.251081 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.123) 0:00:13.333 ********
2026-04-09 00:43:23.251087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.251094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.251101 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251107 | orchestrator |
2026-04-09 00:43:23.251114 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 00:43:23.251120 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.140) 0:00:13.473 ********
2026-04-09 00:43:23.251127 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:23.251133 | orchestrator |
2026-04-09 00:43:23.251140 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 00:43:23.251147 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.133) 0:00:13.607 ********
2026-04-09 00:43:23.251153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.251160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.251167 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251173 | orchestrator |
2026-04-09 00:43:23.251180 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 00:43:23.251192 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.141) 0:00:13.748 ********
2026-04-09 00:43:23.251199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.251205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.251212 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251217 | orchestrator |
2026-04-09 00:43:23.251223 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 00:43:23.251229 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:00.139) 0:00:13.887 ********
2026-04-09 00:43:23.251234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:43:23.251240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:43:23.251246 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251252 | orchestrator |
2026-04-09 00:43:23.251257 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 00:43:23.251263 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.137) 0:00:14.025 ********
2026-04-09 00:43:23.251269 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:23.251274 | orchestrator |
2026-04-09 00:43:23.251280 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:43:23.251289 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.130) 0:00:14.156 ********
2026-04-09 00:43:29.615456 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615522 | orchestrator |
2026-04-09 00:43:29.615531 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 00:43:29.615537 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.115) 0:00:14.272 ********
2026-04-09 00:43:29.615542 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615548 | orchestrator |
2026-04-09 00:43:29.615553 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 00:43:29.615558 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.121) 0:00:14.394 ********
2026-04-09 00:43:29.615563 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:29.615569 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-09 00:43:29.615575 | orchestrator | }
2026-04-09 00:43:29.615580 | orchestrator |
2026-04-09 00:43:29.615585 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 00:43:29.615591 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.254) 0:00:14.648 ********
2026-04-09 00:43:29.615596 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:29.615601 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-09 00:43:29.615606 | orchestrator | }
2026-04-09 00:43:29.615611 | orchestrator |
2026-04-09 00:43:29.615616 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 00:43:29.615621 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.125) 0:00:14.773 ********
2026-04-09 00:43:29.615626 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:29.615631 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 00:43:29.615636 | orchestrator | }
2026-04-09 00:43:29.615641 | orchestrator |
2026-04-09 00:43:29.615646 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 00:43:29.615652 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.130) 0:00:14.904 ********
2026-04-09 00:43:29.615657 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:29.615662 | orchestrator |
2026-04-09 00:43:29.615667 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 00:43:29.615672 | orchestrator | Thursday 09 April 2026 00:43:24 +0000 (0:00:00.633) 0:00:15.537 ********
2026-04-09 00:43:29.615692 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:29.615697 | orchestrator |
2026-04-09 00:43:29.615703 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 00:43:29.615708 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.504) 0:00:16.042 ********
2026-04-09 00:43:29.615713 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:29.615718 | orchestrator |
2026-04-09 00:43:29.615723 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 00:43:29.615728 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.162) 0:00:16.555 ********
2026-04-09 00:43:29.615733 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:43:29.615738 | orchestrator |
2026-04-09 00:43:29.615743 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 00:43:29.615748 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.145) 0:00:16.717 ********
2026-04-09 00:43:29.615753 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615758 | orchestrator |
2026-04-09 00:43:29.615763 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 00:43:29.615768 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.133) 0:00:16.863 ********
2026-04-09 00:43:29.615773 | orchestrator | skipping: [testbed-node-3]
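The "Gather ... VGs with total and available size in bytes" and "Combine JSON" tasks suggest the play collects `vgs` output in lvm2's JSON report format, which nests the volume groups as `{"report": [{"vg": [...]}]}`; the empty `"vg": []` list printed shortly afterwards simply means no DB/WAL VGs exist on this node. A hedged sketch of parsing such a report (the field names follow lvm2's JSON output with byte units; the function and surrounding plumbing are illustrative assumptions):

```python
import json

def vg_sizes(vgs_json: str) -> dict:
    """Map each VG name to its total and free size in bytes, given the
    JSON emitted by something like `vgs --units B --reportformat json`."""
    vgs = json.loads(vgs_json)["report"][0]["vg"]
    return {
        vg["vg_name"]: {
            # With --units B, lvm2 prints sizes like "10737418240B".
            "total": int(vg["vg_size"].rstrip("B")),
            "free": int(vg["vg_free"].rstrip("B")),
        }
        for vg in vgs
    }
```

On this run the equivalent input would carry an empty `"vg"` list, so every downstream size check is skipped, matching the `skipping: [testbed-node-3]` results above.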
2026-04-09 00:43:29.615778 | orchestrator |
2026-04-09 00:43:29.615783 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 00:43:29.615788 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.133) 0:00:16.996 ********
2026-04-09 00:43:29.615793 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:43:29.615799 | orchestrator |     "vgs_report": {
2026-04-09 00:43:29.615804 | orchestrator |         "vg": []
2026-04-09 00:43:29.615809 | orchestrator |     }
2026-04-09 00:43:29.615815 | orchestrator | }
2026-04-09 00:43:29.615820 | orchestrator |
2026-04-09 00:43:29.615825 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 00:43:29.615830 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.158) 0:00:17.155 ********
2026-04-09 00:43:29.615835 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615840 | orchestrator |
2026-04-09 00:43:29.615845 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 00:43:29.615850 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.140) 0:00:17.295 ********
2026-04-09 00:43:29.615855 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615860 | orchestrator |
2026-04-09 00:43:29.615866 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 00:43:29.615871 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.139) 0:00:17.434 ********
2026-04-09 00:43:29.615876 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615881 | orchestrator |
2026-04-09 00:43:29.615886 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 00:43:29.615891 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.350) 0:00:17.785 ********
2026-04-09 00:43:29.615896 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:43:29.615901 | orchestrator | 2026-04-09 00:43:29.615906 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 00:43:29.615911 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.129) 0:00:17.915 ******** 2026-04-09 00:43:29.615916 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.615921 | orchestrator | 2026-04-09 00:43:29.615926 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 00:43:29.615931 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.138) 0:00:18.053 ******** 2026-04-09 00:43:29.615936 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.615941 | orchestrator | 2026-04-09 00:43:29.615946 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-09 00:43:29.615951 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.149) 0:00:18.203 ******** 2026-04-09 00:43:29.615956 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.615965 | orchestrator | 2026-04-09 00:43:29.615971 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 00:43:29.615976 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.159) 0:00:18.363 ******** 2026-04-09 00:43:29.615990 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.615997 | orchestrator | 2026-04-09 00:43:29.616014 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 00:43:29.616020 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.146) 0:00:18.509 ******** 2026-04-09 00:43:29.616026 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616032 | orchestrator | 2026-04-09 00:43:29.616038 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-09 00:43:29.616044 | orchestrator | 
Thursday 09 April 2026 00:43:27 +0000 (0:00:00.136) 0:00:18.645 ******** 2026-04-09 00:43:29.616050 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616055 | orchestrator | 2026-04-09 00:43:29.616061 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 00:43:29.616067 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.135) 0:00:18.780 ******** 2026-04-09 00:43:29.616073 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616079 | orchestrator | 2026-04-09 00:43:29.616085 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 00:43:29.616091 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.134) 0:00:18.916 ******** 2026-04-09 00:43:29.616097 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616103 | orchestrator | 2026-04-09 00:43:29.616109 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 00:43:29.616115 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.141) 0:00:19.058 ******** 2026-04-09 00:43:29.616121 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616127 | orchestrator | 2026-04-09 00:43:29.616133 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 00:43:29.616138 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.126) 0:00:19.184 ******** 2026-04-09 00:43:29.616144 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616150 | orchestrator | 2026-04-09 00:43:29.616158 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 00:43:29.616164 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.125) 0:00:19.310 ******** 2026-04-09 00:43:29.616171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 
'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:29.616178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:29.616184 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616190 | orchestrator | 2026-04-09 00:43:29.616196 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-09 00:43:29.616201 | orchestrator | Thursday 09 April 2026 00:43:28 +0000 (0:00:00.163) 0:00:19.473 ******** 2026-04-09 00:43:29.616207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:29.616213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:29.616219 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616225 | orchestrator | 2026-04-09 00:43:29.616231 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-09 00:43:29.616237 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.441) 0:00:19.915 ******** 2026-04-09 00:43:29.616242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:29.616248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:29.616257 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616264 | orchestrator | 2026-04-09 00:43:29.616270 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-09 00:43:29.616276 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.168) 0:00:20.083 ******** 2026-04-09 00:43:29.616282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:29.616288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:29.616293 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616298 | orchestrator | 2026-04-09 00:43:29.616304 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-09 00:43:29.616309 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.154) 0:00:20.238 ******** 2026-04-09 00:43:29.616314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:29.616319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:29.616324 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:29.616329 | orchestrator | 2026-04-09 00:43:29.616334 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-09 00:43:29.616339 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.158) 0:00:20.396 ******** 2026-04-09 00:43:29.616347 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:35.478323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 
'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:35.478533 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:35.478565 | orchestrator | 2026-04-09 00:43:35.478589 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-09 00:43:35.478611 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.239) 0:00:20.636 ******** 2026-04-09 00:43:35.478625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:35.478636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:35.478647 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:35.478658 | orchestrator | 2026-04-09 00:43:35.478669 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-09 00:43:35.478681 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:00.237) 0:00:20.873 ******** 2026-04-09 00:43:35.478692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:35.478721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:35.478733 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:35.478744 | orchestrator | 2026-04-09 00:43:35.478755 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-09 00:43:35.478766 | orchestrator | Thursday 09 April 2026 00:43:30 +0000 (0:00:00.221) 0:00:21.094 ******** 2026-04-09 00:43:35.478777 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:43:35.478789 | 
orchestrator | 2026-04-09 00:43:35.478825 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-09 00:43:35.478837 | orchestrator | Thursday 09 April 2026 00:43:30 +0000 (0:00:00.487) 0:00:21.583 ******** 2026-04-09 00:43:35.478874 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:43:35.478899 | orchestrator | 2026-04-09 00:43:35.478912 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-09 00:43:35.478924 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.507) 0:00:22.090 ******** 2026-04-09 00:43:35.478936 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:43:35.478948 | orchestrator | 2026-04-09 00:43:35.478960 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-09 00:43:35.478974 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.159) 0:00:22.249 ******** 2026-04-09 00:43:35.478986 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'vg_name': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'}) 2026-04-09 00:43:35.479002 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'vg_name': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'}) 2026-04-09 00:43:35.479013 | orchestrator | 2026-04-09 00:43:35.479026 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-09 00:43:35.479038 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.185) 0:00:22.435 ******** 2026-04-09 00:43:35.479051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:35.479064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 
'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:35.479076 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:35.479089 | orchestrator | 2026-04-09 00:43:35.479101 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-09 00:43:35.479114 | orchestrator | Thursday 09 April 2026 00:43:31 +0000 (0:00:00.174) 0:00:22.609 ******** 2026-04-09 00:43:35.479126 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:35.479139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:35.479151 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:35.479163 | orchestrator | 2026-04-09 00:43:35.479175 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 00:43:35.479188 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.377) 0:00:22.987 ******** 2026-04-09 00:43:35.479200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})  2026-04-09 00:43:35.479213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})  2026-04-09 00:43:35.479224 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:43:35.479235 | orchestrator | 2026-04-09 00:43:35.479246 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 00:43:35.479261 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.168) 0:00:23.155 ******** 2026-04-09 00:43:35.479306 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 
00:43:35.479326 | orchestrator |  "lvm_report": { 2026-04-09 00:43:35.479347 | orchestrator |  "lv": [ 2026-04-09 00:43:35.479366 | orchestrator |  { 2026-04-09 00:43:35.479411 | orchestrator |  "lv_name": "osd-block-0ecce907-b02d-5708-a2ce-6926a186870f", 2026-04-09 00:43:35.479432 | orchestrator |  "vg_name": "ceph-0ecce907-b02d-5708-a2ce-6926a186870f" 2026-04-09 00:43:35.479449 | orchestrator |  }, 2026-04-09 00:43:35.479479 | orchestrator |  { 2026-04-09 00:43:35.479490 | orchestrator |  "lv_name": "osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b", 2026-04-09 00:43:35.479501 | orchestrator |  "vg_name": "ceph-b063fe53-4e4e-551f-8a45-331436b07c8b" 2026-04-09 00:43:35.479512 | orchestrator |  } 2026-04-09 00:43:35.479523 | orchestrator |  ], 2026-04-09 00:43:35.479533 | orchestrator |  "pv": [ 2026-04-09 00:43:35.479544 | orchestrator |  { 2026-04-09 00:43:35.479555 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 00:43:35.479565 | orchestrator |  "vg_name": "ceph-0ecce907-b02d-5708-a2ce-6926a186870f" 2026-04-09 00:43:35.479576 | orchestrator |  }, 2026-04-09 00:43:35.479587 | orchestrator |  { 2026-04-09 00:43:35.479597 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 00:43:35.479608 | orchestrator |  "vg_name": "ceph-b063fe53-4e4e-551f-8a45-331436b07c8b" 2026-04-09 00:43:35.479619 | orchestrator |  } 2026-04-09 00:43:35.479630 | orchestrator |  ] 2026-04-09 00:43:35.479640 | orchestrator |  } 2026-04-09 00:43:35.479651 | orchestrator | } 2026-04-09 00:43:35.479664 | orchestrator | 2026-04-09 00:43:35.479682 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 00:43:35.479699 | orchestrator | 2026-04-09 00:43:35.479717 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:43:35.479736 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.311) 0:00:23.466 ******** 2026-04-09 00:43:35.479756 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-09 00:43:35.479775 | orchestrator | 2026-04-09 00:43:35.479793 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:43:35.479813 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:00.257) 0:00:23.724 ******** 2026-04-09 00:43:35.479832 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:35.479850 | orchestrator | 2026-04-09 00:43:35.479869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.479888 | orchestrator | Thursday 09 April 2026 00:43:33 +0000 (0:00:00.258) 0:00:23.983 ******** 2026-04-09 00:43:35.479907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:43:35.479925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:43:35.479943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:43:35.479957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:43:35.479968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:43:35.479979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:43:35.479990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:43:35.480001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:43:35.480011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 00:43:35.480032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:43:35.480043 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:43:35.480054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:43:35.480065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:43:35.480075 | orchestrator | 2026-04-09 00:43:35.480086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.480097 | orchestrator | Thursday 09 April 2026 00:43:33 +0000 (0:00:00.478) 0:00:24.462 ******** 2026-04-09 00:43:35.480108 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:35.480129 | orchestrator | 2026-04-09 00:43:35.480147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.480165 | orchestrator | Thursday 09 April 2026 00:43:33 +0000 (0:00:00.219) 0:00:24.681 ******** 2026-04-09 00:43:35.480184 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:35.480202 | orchestrator | 2026-04-09 00:43:35.480215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.480226 | orchestrator | Thursday 09 April 2026 00:43:34 +0000 (0:00:00.242) 0:00:24.923 ******** 2026-04-09 00:43:35.480236 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:35.480247 | orchestrator | 2026-04-09 00:43:35.480258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.480269 | orchestrator | Thursday 09 April 2026 00:43:34 +0000 (0:00:00.252) 0:00:25.176 ******** 2026-04-09 00:43:35.480280 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:35.480291 | orchestrator | 2026-04-09 00:43:35.480329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.480341 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 
(0:00:00.781) 0:00:25.957 ******** 2026-04-09 00:43:35.480368 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:35.480476 | orchestrator | 2026-04-09 00:43:35.480502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:35.480514 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.213) 0:00:26.171 ******** 2026-04-09 00:43:35.480525 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:35.480551 | orchestrator | 2026-04-09 00:43:35.480587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.218734 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.211) 0:00:26.383 ******** 2026-04-09 00:43:46.218821 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.218832 | orchestrator | 2026-04-09 00:43:46.218841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.218850 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.222) 0:00:26.606 ******** 2026-04-09 00:43:46.218857 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.218865 | orchestrator | 2026-04-09 00:43:46.218872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.218880 | orchestrator | Thursday 09 April 2026 00:43:35 +0000 (0:00:00.265) 0:00:26.871 ******** 2026-04-09 00:43:46.218887 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf) 2026-04-09 00:43:46.218896 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf) 2026-04-09 00:43:46.218903 | orchestrator | 2026-04-09 00:43:46.218910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.218918 | orchestrator | Thursday 09 April 2026 00:43:36 +0000 
(0:00:00.431) 0:00:27.303 ******** 2026-04-09 00:43:46.218925 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b) 2026-04-09 00:43:46.218932 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b) 2026-04-09 00:43:46.218939 | orchestrator | 2026-04-09 00:43:46.218960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.218967 | orchestrator | Thursday 09 April 2026 00:43:36 +0000 (0:00:00.456) 0:00:27.760 ******** 2026-04-09 00:43:46.218974 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48) 2026-04-09 00:43:46.218982 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48) 2026-04-09 00:43:46.218989 | orchestrator | 2026-04-09 00:43:46.218996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.219003 | orchestrator | Thursday 09 April 2026 00:43:37 +0000 (0:00:00.452) 0:00:28.213 ******** 2026-04-09 00:43:46.219010 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec) 2026-04-09 00:43:46.219035 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec) 2026-04-09 00:43:46.219043 | orchestrator | 2026-04-09 00:43:46.219050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:43:46.219057 | orchestrator | Thursday 09 April 2026 00:43:37 +0000 (0:00:00.536) 0:00:28.749 ******** 2026-04-09 00:43:46.219065 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:43:46.219072 | orchestrator | 2026-04-09 00:43:46.219079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 
00:43:46.219086 | orchestrator | Thursday 09 April 2026 00:43:38 +0000 (0:00:00.333) 0:00:29.082 ******** 2026-04-09 00:43:46.219093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:43:46.219100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:43:46.219107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:43:46.219114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:43:46.219121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:43:46.219128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:43:46.219135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:43:46.219143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:43:46.219150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-09 00:43:46.219157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:43:46.219164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:43:46.219171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:43:46.219178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:43:46.219185 | orchestrator | 2026-04-09 00:43:46.219192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219199 | 
orchestrator | Thursday 09 April 2026 00:43:38 +0000 (0:00:00.643) 0:00:29.726 ******** 2026-04-09 00:43:46.219206 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219214 | orchestrator | 2026-04-09 00:43:46.219221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219228 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.229) 0:00:29.955 ******** 2026-04-09 00:43:46.219235 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219242 | orchestrator | 2026-04-09 00:43:46.219249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219256 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.220) 0:00:30.176 ******** 2026-04-09 00:43:46.219265 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219273 | orchestrator | 2026-04-09 00:43:46.219294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219304 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.235) 0:00:30.412 ******** 2026-04-09 00:43:46.219312 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219320 | orchestrator | 2026-04-09 00:43:46.219328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219337 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.215) 0:00:30.627 ******** 2026-04-09 00:43:46.219345 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219353 | orchestrator | 2026-04-09 00:43:46.219362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219396 | orchestrator | Thursday 09 April 2026 00:43:39 +0000 (0:00:00.223) 0:00:30.851 ******** 2026-04-09 00:43:46.219405 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219414 | orchestrator | 2026-04-09 
00:43:46.219422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219431 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.237) 0:00:31.088 ******** 2026-04-09 00:43:46.219439 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219447 | orchestrator | 2026-04-09 00:43:46.219455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219464 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.190) 0:00:31.278 ******** 2026-04-09 00:43:46.219472 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219480 | orchestrator | 2026-04-09 00:43:46.219488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219501 | orchestrator | Thursday 09 April 2026 00:43:40 +0000 (0:00:00.200) 0:00:31.479 ******** 2026-04-09 00:43:46.219510 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-09 00:43:46.219518 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-09 00:43:46.219527 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-09 00:43:46.219535 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-09 00:43:46.219543 | orchestrator | 2026-04-09 00:43:46.219552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219560 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.880) 0:00:32.360 ******** 2026-04-09 00:43:46.219568 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219576 | orchestrator | 2026-04-09 00:43:46.219585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219593 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.199) 0:00:32.559 ******** 2026-04-09 00:43:46.219601 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:43:46.219609 | orchestrator | 2026-04-09 00:43:46.219618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219626 | orchestrator | Thursday 09 April 2026 00:43:41 +0000 (0:00:00.189) 0:00:32.749 ******** 2026-04-09 00:43:46.219634 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219642 | orchestrator | 2026-04-09 00:43:46.219651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:43:46.219658 | orchestrator | Thursday 09 April 2026 00:43:42 +0000 (0:00:00.659) 0:00:33.408 ******** 2026-04-09 00:43:46.219665 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219672 | orchestrator | 2026-04-09 00:43:46.219679 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-09 00:43:46.219686 | orchestrator | Thursday 09 April 2026 00:43:42 +0000 (0:00:00.209) 0:00:33.618 ******** 2026-04-09 00:43:46.219693 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219700 | orchestrator | 2026-04-09 00:43:46.219707 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-09 00:43:46.219714 | orchestrator | Thursday 09 April 2026 00:43:42 +0000 (0:00:00.142) 0:00:33.761 ******** 2026-04-09 00:43:46.219721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa87c95d-d840-5309-8296-5c77234dd7e9'}}) 2026-04-09 00:43:46.219729 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}}) 2026-04-09 00:43:46.219736 | orchestrator | 2026-04-09 00:43:46.219743 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-09 00:43:46.219750 | orchestrator | Thursday 09 April 2026 00:43:43 +0000 (0:00:00.225) 0:00:33.987 ******** 2026-04-09 00:43:46.219758 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'}) 2026-04-09 00:43:46.219767 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}) 2026-04-09 00:43:46.219779 | orchestrator | 2026-04-09 00:43:46.219786 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-09 00:43:46.219793 | orchestrator | Thursday 09 April 2026 00:43:44 +0000 (0:00:01.798) 0:00:35.785 ******** 2026-04-09 00:43:46.219800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:46.219809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:46.219816 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:46.219823 | orchestrator | 2026-04-09 00:43:46.219830 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-09 00:43:46.219837 | orchestrator | Thursday 09 April 2026 00:43:45 +0000 (0:00:00.154) 0:00:35.939 ******** 2026-04-09 00:43:46.219844 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'}) 2026-04-09 00:43:46.219856 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}) 2026-04-09 00:43:51.323753 | orchestrator | 2026-04-09 00:43:51.323848 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-09 00:43:51.323860 | orchestrator | Thursday 09 April 2026 
00:43:46 +0000 (0:00:01.258) 0:00:37.198 ******** 2026-04-09 00:43:51.323868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.323876 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.323883 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.323891 | orchestrator | 2026-04-09 00:43:51.323898 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-09 00:43:51.323904 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.137) 0:00:37.335 ******** 2026-04-09 00:43:51.323911 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.323918 | orchestrator | 2026-04-09 00:43:51.323924 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-09 00:43:51.323931 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.123) 0:00:37.459 ******** 2026-04-09 00:43:51.323938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.323951 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.323962 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.323972 | orchestrator | 2026-04-09 00:43:51.323984 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-09 00:43:51.323997 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.135) 0:00:37.594 ******** 2026-04-09 00:43:51.324010 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:43:51.324022 | orchestrator | 2026-04-09 00:43:51.324032 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-09 00:43:51.324054 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.120) 0:00:37.715 ******** 2026-04-09 00:43:51.324061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.324068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.324092 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324099 | orchestrator | 2026-04-09 00:43:51.324106 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-09 00:43:51.324113 | orchestrator | Thursday 09 April 2026 00:43:46 +0000 (0:00:00.127) 0:00:37.842 ******** 2026-04-09 00:43:51.324125 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324136 | orchestrator | 2026-04-09 00:43:51.324163 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-09 00:43:51.324176 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.272) 0:00:38.115 ******** 2026-04-09 00:43:51.324187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.324199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.324211 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324222 | orchestrator | 2026-04-09 00:43:51.324232 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-04-09 00:43:51.324243 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.159) 0:00:38.274 ******** 2026-04-09 00:43:51.324254 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:51.324262 | orchestrator | 2026-04-09 00:43:51.324268 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-09 00:43:51.324274 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.133) 0:00:38.408 ******** 2026-04-09 00:43:51.324281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.324289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.324296 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324303 | orchestrator | 2026-04-09 00:43:51.324310 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-09 00:43:51.324317 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.143) 0:00:38.551 ******** 2026-04-09 00:43:51.324324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.324332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.324341 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324351 | orchestrator | 2026-04-09 00:43:51.324363 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-09 00:43:51.324413 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.141) 0:00:38.693 
******** 2026-04-09 00:43:51.324425 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:51.324436 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:51.324448 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324458 | orchestrator | 2026-04-09 00:43:51.324468 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-09 00:43:51.324477 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.143) 0:00:38.836 ******** 2026-04-09 00:43:51.324485 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324492 | orchestrator | 2026-04-09 00:43:51.324500 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-09 00:43:51.324511 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.142) 0:00:38.979 ******** 2026-04-09 00:43:51.324531 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324543 | orchestrator | 2026-04-09 00:43:51.324554 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-09 00:43:51.324570 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.146) 0:00:39.126 ******** 2026-04-09 00:43:51.324578 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324586 | orchestrator | 2026-04-09 00:43:51.324593 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-09 00:43:51.324599 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.113) 0:00:39.240 ******** 2026-04-09 00:43:51.324605 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:43:51.324611 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-09 
00:43:51.324618 | orchestrator | } 2026-04-09 00:43:51.324624 | orchestrator | 2026-04-09 00:43:51.324630 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-09 00:43:51.324636 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.135) 0:00:39.376 ******** 2026-04-09 00:43:51.324642 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:43:51.324649 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-09 00:43:51.324655 | orchestrator | } 2026-04-09 00:43:51.324661 | orchestrator | 2026-04-09 00:43:51.324667 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-09 00:43:51.324674 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.117) 0:00:39.493 ******** 2026-04-09 00:43:51.324680 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:43:51.324686 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-09 00:43:51.324692 | orchestrator | } 2026-04-09 00:43:51.324699 | orchestrator | 2026-04-09 00:43:51.324705 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-09 00:43:51.324711 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.132) 0:00:39.626 ******** 2026-04-09 00:43:51.324717 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:51.324723 | orchestrator | 2026-04-09 00:43:51.324729 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-09 00:43:51.324735 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.656) 0:00:40.283 ******** 2026-04-09 00:43:51.324742 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:51.324748 | orchestrator | 2026-04-09 00:43:51.324754 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-09 00:43:51.324760 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.502) 0:00:40.785 ******** 2026-04-09 
00:43:51.324766 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:51.324772 | orchestrator | 2026-04-09 00:43:51.324779 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-09 00:43:51.324785 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.487) 0:00:41.272 ******** 2026-04-09 00:43:51.324791 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:51.324797 | orchestrator | 2026-04-09 00:43:51.324803 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-09 00:43:51.324809 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.136) 0:00:41.409 ******** 2026-04-09 00:43:51.324816 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324822 | orchestrator | 2026-04-09 00:43:51.324828 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-09 00:43:51.324834 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.110) 0:00:41.520 ******** 2026-04-09 00:43:51.324840 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324846 | orchestrator | 2026-04-09 00:43:51.324852 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-09 00:43:51.324859 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.101) 0:00:41.622 ******** 2026-04-09 00:43:51.324865 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:43:51.324871 | orchestrator |  "vgs_report": { 2026-04-09 00:43:51.324878 | orchestrator |  "vg": [] 2026-04-09 00:43:51.324884 | orchestrator |  } 2026-04-09 00:43:51.324891 | orchestrator | } 2026-04-09 00:43:51.324903 | orchestrator | 2026-04-09 00:43:51.324910 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-09 00:43:51.324916 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.128) 0:00:41.750 ******** 2026-04-09 00:43:51.324922 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324928 | orchestrator | 2026-04-09 00:43:51.324934 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-09 00:43:51.324940 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.107) 0:00:41.857 ******** 2026-04-09 00:43:51.324946 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324952 | orchestrator | 2026-04-09 00:43:51.324959 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-09 00:43:51.324965 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.122) 0:00:41.979 ******** 2026-04-09 00:43:51.324971 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.324977 | orchestrator | 2026-04-09 00:43:51.324983 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-09 00:43:51.324990 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.119) 0:00:42.099 ******** 2026-04-09 00:43:51.324996 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:51.325002 | orchestrator | 2026-04-09 00:43:51.325013 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 00:43:55.653736 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.132) 0:00:42.231 ******** 2026-04-09 00:43:55.653836 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.653854 | orchestrator | 2026-04-09 00:43:55.653866 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 00:43:55.653878 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.120) 0:00:42.352 ******** 2026-04-09 00:43:55.653885 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.653891 | orchestrator | 2026-04-09 00:43:55.653898 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-09 00:43:55.653905 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.276) 0:00:42.629 ******** 2026-04-09 00:43:55.653911 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.653917 | orchestrator | 2026-04-09 00:43:55.653924 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 00:43:55.653930 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.127) 0:00:42.757 ******** 2026-04-09 00:43:55.653937 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.653948 | orchestrator | 2026-04-09 00:43:55.653958 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 00:43:55.653968 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:00.129) 0:00:42.886 ******** 2026-04-09 00:43:55.653994 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654004 | orchestrator | 2026-04-09 00:43:55.654054 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-09 00:43:55.654062 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.131) 0:00:43.017 ******** 2026-04-09 00:43:55.654068 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654074 | orchestrator | 2026-04-09 00:43:55.654081 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 00:43:55.654087 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.111) 0:00:43.129 ******** 2026-04-09 00:43:55.654093 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654105 | orchestrator | 2026-04-09 00:43:55.654112 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 00:43:55.654118 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.120) 0:00:43.249 ******** 2026-04-09 00:43:55.654125 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654131 
| orchestrator | 2026-04-09 00:43:55.654138 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 00:43:55.654144 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.117) 0:00:43.366 ******** 2026-04-09 00:43:55.654150 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654176 | orchestrator | 2026-04-09 00:43:55.654183 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 00:43:55.654198 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.149) 0:00:43.515 ******** 2026-04-09 00:43:55.654204 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654210 | orchestrator | 2026-04-09 00:43:55.654216 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 00:43:55.654223 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.124) 0:00:43.640 ******** 2026-04-09 00:43:55.654230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654244 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654251 | orchestrator | 2026-04-09 00:43:55.654257 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-09 00:43:55.654263 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.144) 0:00:43.784 ******** 2026-04-09 00:43:55.654270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654276 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654282 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654288 | orchestrator | 2026-04-09 00:43:55.654295 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-09 00:43:55.654301 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.128) 0:00:43.912 ******** 2026-04-09 00:43:55.654307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654319 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654326 | orchestrator | 2026-04-09 00:43:55.654332 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-09 00:43:55.654338 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.133) 0:00:44.046 ******** 2026-04-09 00:43:55.654344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654357 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654364 | orchestrator | 2026-04-09 00:43:55.654412 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-09 00:43:55.654421 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.279) 0:00:44.326 ******** 2026-04-09 
00:43:55.654427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654433 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654439 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654446 | orchestrator | 2026-04-09 00:43:55.654452 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-09 00:43:55.654458 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.129) 0:00:44.456 ******** 2026-04-09 00:43:55.654469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654482 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654488 | orchestrator | 2026-04-09 00:43:55.654495 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-09 00:43:55.654501 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.153) 0:00:44.609 ******** 2026-04-09 00:43:55.654507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654520 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654526 | orchestrator | 
2026-04-09 00:43:55.654532 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-09 00:43:55.654538 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.135) 0:00:44.745 ******** 2026-04-09 00:43:55.654544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654551 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654557 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654563 | orchestrator | 2026-04-09 00:43:55.654569 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-09 00:43:55.654575 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.135) 0:00:44.880 ******** 2026-04-09 00:43:55.654582 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:55.654588 | orchestrator | 2026-04-09 00:43:55.654594 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-09 00:43:55.654600 | orchestrator | Thursday 09 April 2026 00:43:54 +0000 (0:00:00.609) 0:00:45.489 ******** 2026-04-09 00:43:55.654606 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:55.654613 | orchestrator | 2026-04-09 00:43:55.654619 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-09 00:43:55.654629 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.511) 0:00:46.001 ******** 2026-04-09 00:43:55.654652 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:43:55.654663 | orchestrator | 2026-04-09 00:43:55.654673 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-09 00:43:55.654693 | orchestrator | Thursday 09 April 2026 
00:43:55 +0000 (0:00:00.163) 0:00:46.164 ******** 2026-04-09 00:43:55.654699 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'vg_name': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'}) 2026-04-09 00:43:55.654709 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'vg_name': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'}) 2026-04-09 00:43:55.654719 | orchestrator | 2026-04-09 00:43:55.654729 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-09 00:43:55.654740 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.162) 0:00:46.326 ******** 2026-04-09 00:43:55.654750 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:43:55.654799 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:43:55.654818 | orchestrator | 2026-04-09 00:43:55.654829 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-09 00:43:55.654839 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.156) 0:00:46.483 ******** 2026-04-09 00:43:55.654848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:43:55.654865 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:44:01.020874 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.020999 | orchestrator | 2026-04-09 
00:44:01.021019 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 00:44:01.021032 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.148) 0:00:46.631 ******** 2026-04-09 00:44:01.021044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})  2026-04-09 00:44:01.021058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})  2026-04-09 00:44:01.021069 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:01.021080 | orchestrator | 2026-04-09 00:44:01.021091 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 00:44:01.021102 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.157) 0:00:46.789 ******** 2026-04-09 00:44:01.021113 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:44:01.021124 | orchestrator |  "lvm_report": { 2026-04-09 00:44:01.021137 | orchestrator |  "lv": [ 2026-04-09 00:44:01.021165 | orchestrator |  { 2026-04-09 00:44:01.021177 | orchestrator |  "lv_name": "osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd", 2026-04-09 00:44:01.021189 | orchestrator |  "vg_name": "ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd" 2026-04-09 00:44:01.021200 | orchestrator |  }, 2026-04-09 00:44:01.021226 | orchestrator |  { 2026-04-09 00:44:01.021247 | orchestrator |  "lv_name": "osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9", 2026-04-09 00:44:01.021259 | orchestrator |  "vg_name": "ceph-fa87c95d-d840-5309-8296-5c77234dd7e9" 2026-04-09 00:44:01.021270 | orchestrator |  } 2026-04-09 00:44:01.021281 | orchestrator |  ], 2026-04-09 00:44:01.021292 | orchestrator |  "pv": [ 2026-04-09 00:44:01.021303 | orchestrator |  { 2026-04-09 00:44:01.021314 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 
00:44:01.021325 | orchestrator |  "vg_name": "ceph-fa87c95d-d840-5309-8296-5c77234dd7e9" 2026-04-09 00:44:01.021336 | orchestrator |  }, 2026-04-09 00:44:01.021347 | orchestrator |  { 2026-04-09 00:44:01.021358 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 00:44:01.021369 | orchestrator |  "vg_name": "ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd" 2026-04-09 00:44:01.021380 | orchestrator |  } 2026-04-09 00:44:01.021417 | orchestrator |  ] 2026-04-09 00:44:01.021431 | orchestrator |  } 2026-04-09 00:44:01.021443 | orchestrator | } 2026-04-09 00:44:01.021457 | orchestrator | 2026-04-09 00:44:01.021470 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 00:44:01.021483 | orchestrator | 2026-04-09 00:44:01.021496 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:44:01.021510 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.413) 0:00:47.203 ******** 2026-04-09 00:44:01.021522 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-09 00:44:01.021535 | orchestrator | 2026-04-09 00:44:01.021548 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:44:01.021562 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.226) 0:00:47.429 ******** 2026-04-09 00:44:01.021599 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:01.021613 | orchestrator | 2026-04-09 00:44:01.021627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.021640 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.203) 0:00:47.633 ******** 2026-04-09 00:44:01.021653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:44:01.021667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-09 
00:44:01.021679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:44:01.021697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:44:01.021711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:44:01.021724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:44:01.021736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:44:01.021761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:44:01.021773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-09 00:44:01.021793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:44:01.021804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:44:01.021815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:44:01.021826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:44:01.021837 | orchestrator | 2026-04-09 00:44:01.021848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.021859 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.381) 0:00:48.014 ******** 2026-04-09 00:44:01.021870 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:01.021880 | orchestrator | 2026-04-09 00:44:01.021891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:44:01.021902 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.193) 0:00:48.208 
********
2026-04-09 00:44:01.021913 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.021924 | orchestrator |
2026-04-09 00:44:01.021935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.021964 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.182) 0:00:48.391 ********
2026-04-09 00:44:01.021976 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.021987 | orchestrator |
2026-04-09 00:44:01.021998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022009 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.200) 0:00:48.591 ********
2026-04-09 00:44:01.022096 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.022108 | orchestrator |
2026-04-09 00:44:01.022119 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022129 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.174) 0:00:48.765 ********
2026-04-09 00:44:01.022140 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.022151 | orchestrator |
2026-04-09 00:44:01.022162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022173 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.188) 0:00:48.954 ********
2026-04-09 00:44:01.022184 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.022195 | orchestrator |
2026-04-09 00:44:01.022206 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022223 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.477) 0:00:49.432 ********
2026-04-09 00:44:01.022235 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.022256 | orchestrator |
2026-04-09 00:44:01.022267 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022278 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.183) 0:00:49.615 ********
2026-04-09 00:44:01.022289 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:01.022299 | orchestrator |
2026-04-09 00:44:01.022310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022321 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.166) 0:00:49.782 ********
2026-04-09 00:44:01.022332 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae)
2026-04-09 00:44:01.022343 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae)
2026-04-09 00:44:01.022354 | orchestrator |
2026-04-09 00:44:01.022365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022376 | orchestrator | Thursday 09 April 2026 00:43:59 +0000 (0:00:00.384) 0:00:50.167 ********
2026-04-09 00:44:01.022387 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3)
2026-04-09 00:44:01.022417 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3)
2026-04-09 00:44:01.022429 | orchestrator |
2026-04-09 00:44:01.022439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022450 | orchestrator | Thursday 09 April 2026 00:43:59 +0000 (0:00:00.383) 0:00:50.550 ********
2026-04-09 00:44:01.022461 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645)
2026-04-09 00:44:01.022472 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645)
2026-04-09 00:44:01.022483 | orchestrator |
2026-04-09 00:44:01.022494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022504 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:00.385) 0:00:50.935 ********
2026-04-09 00:44:01.022515 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47)
2026-04-09 00:44:01.022526 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47)
2026-04-09 00:44:01.022537 | orchestrator |
2026-04-09 00:44:01.022548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:44:01.022559 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:00.392) 0:00:51.327 ********
2026-04-09 00:44:01.022570 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:44:01.022581 | orchestrator |
2026-04-09 00:44:01.022592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:01.022603 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:00.300) 0:00:51.628 ********
2026-04-09 00:44:01.022614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-09 00:44:01.022625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-09 00:44:01.022635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-09 00:44:01.022646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-09 00:44:01.022657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-09 00:44:01.022668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-09 00:44:01.022679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-09 00:44:01.022690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-09 00:44:01.022701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-09 00:44:01.022718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-09 00:44:01.022729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-09 00:44:01.022748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-09 00:44:09.323315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-09 00:44:09.323473 | orchestrator |
2026-04-09 00:44:09.323501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323516 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:00.379) 0:00:52.007 ********
2026-04-09 00:44:09.323529 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323543 | orchestrator |
2026-04-09 00:44:09.323557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323570 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:00.189) 0:00:52.197 ********
2026-04-09 00:44:09.323585 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323598 | orchestrator |
2026-04-09 00:44:09.323613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323627 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:00.201) 0:00:52.399 ********
2026-04-09 00:44:09.323642 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323656 | orchestrator |
2026-04-09 00:44:09.323667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323688 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:00.482) 0:00:52.881 ********
2026-04-09 00:44:09.323696 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323704 | orchestrator |
2026-04-09 00:44:09.323712 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323720 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:00.197) 0:00:53.078 ********
2026-04-09 00:44:09.323728 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323735 | orchestrator |
2026-04-09 00:44:09.323743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323751 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:00.179) 0:00:53.258 ********
2026-04-09 00:44:09.323759 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323773 | orchestrator |
2026-04-09 00:44:09.323792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323806 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:00.178) 0:00:53.436 ********
2026-04-09 00:44:09.323819 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323832 | orchestrator |
2026-04-09 00:44:09.323845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323860 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:00.177) 0:00:53.614 ********
2026-04-09 00:44:09.323875 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.323892 | orchestrator |
2026-04-09 00:44:09.323907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323922 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:00.183) 0:00:53.798 ********
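Editor's note: the "Create dict of block VGs -> PVs from ceph_osd_devices", "Create block VGs", and "Create block LVs" tasks later in this play derive one volume group and one logical volume per OSD device from its `osd_lvm_uuid` (VG `ceph-<uuid>`, LV `osd-block-<uuid>`). A minimal Python sketch of that naming derivation, using only the values printed in this log; this is an illustration, not the OSISM playbook code:

```python
# Hypothetical illustration of the VG/LV naming scheme visible in this log.
# The input dict mirrors the ceph_osd_devices items echoed by the
# "Create dict of block VGs -> PVs from ceph_osd_devices" task.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "e77990f9-27fa-58e8-a0b8-915245e923bd"},
    "sdc": {"osd_lvm_uuid": "6c03351d-b2bb-55a5-9b19-7d0118202256"},
}

# Each OSD device yields a data LV "osd-block-<uuid>" inside a
# data VG "ceph-<uuid>", matching the items of "Create block VGs/LVs".
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

for vol in lvm_volumes:
    print(vol["data_vg"], vol["data"])
```

The same `data`/`data_vg` pairs then reappear verbatim in every subsequent loop item (DB/WAL checks, LV creation, and the final `lvm_volumes` validation).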
2026-04-09 00:44:09.323933 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-09 00:44:09.323943 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-09 00:44:09.323953 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-09 00:44:09.323963 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-09 00:44:09.323974 | orchestrator |
2026-04-09 00:44:09.323984 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.323994 | orchestrator | Thursday 09 April 2026 00:44:03 +0000 (0:00:00.592) 0:00:54.390 ********
2026-04-09 00:44:09.324004 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324014 | orchestrator |
2026-04-09 00:44:09.324024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.324070 | orchestrator | Thursday 09 April 2026 00:44:03 +0000 (0:00:00.175) 0:00:54.566 ********
2026-04-09 00:44:09.324082 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324092 | orchestrator |
2026-04-09 00:44:09.324103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.324113 | orchestrator | Thursday 09 April 2026 00:44:03 +0000 (0:00:00.182) 0:00:54.748 ********
2026-04-09 00:44:09.324123 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324133 | orchestrator |
2026-04-09 00:44:09.324144 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:44:09.324154 | orchestrator | Thursday 09 April 2026 00:44:04 +0000 (0:00:00.187) 0:00:54.936 ********
2026-04-09 00:44:09.324165 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324174 | orchestrator |
2026-04-09 00:44:09.324185 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 00:44:09.324195 | orchestrator | Thursday 09 April 2026 00:44:04 +0000 (0:00:00.191) 0:00:55.127 ********
2026-04-09 00:44:09.324205 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324216 | orchestrator |
2026-04-09 00:44:09.324226 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 00:44:09.324236 | orchestrator | Thursday 09 April 2026 00:44:04 +0000 (0:00:00.304) 0:00:55.432 ********
2026-04-09 00:44:09.324246 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e77990f9-27fa-58e8-a0b8-915245e923bd'}})
2026-04-09 00:44:09.324257 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6c03351d-b2bb-55a5-9b19-7d0118202256'}})
2026-04-09 00:44:09.324267 | orchestrator |
2026-04-09 00:44:09.324278 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 00:44:09.324287 | orchestrator | Thursday 09 April 2026 00:44:04 +0000 (0:00:00.185) 0:00:55.618 ********
2026-04-09 00:44:09.324297 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324307 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324315 | orchestrator |
2026-04-09 00:44:09.324324 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 00:44:09.324350 | orchestrator | Thursday 09 April 2026 00:44:06 +0000 (0:00:01.895) 0:00:57.513 ********
2026-04-09 00:44:09.324360 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324378 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324387 | orchestrator |
2026-04-09 00:44:09.324396 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 00:44:09.324418 | orchestrator | Thursday 09 April 2026 00:44:06 +0000 (0:00:00.150) 0:00:57.663 ********
2026-04-09 00:44:09.324427 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324436 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324445 | orchestrator |
2026-04-09 00:44:09.324453 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 00:44:09.324462 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:01.405) 0:00:59.069 ********
2026-04-09 00:44:09.324471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324495 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324504 | orchestrator |
2026-04-09 00:44:09.324513 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 00:44:09.324521 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:00.142) 0:00:59.211 ********
2026-04-09 00:44:09.324530 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324538 | orchestrator |
2026-04-09 00:44:09.324547 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 00:44:09.324555 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:00.145) 0:00:59.356 ********
2026-04-09 00:44:09.324564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324581 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324589 | orchestrator |
2026-04-09 00:44:09.324598 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 00:44:09.324606 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:00.149) 0:00:59.506 ********
2026-04-09 00:44:09.324615 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324623 | orchestrator |
2026-04-09 00:44:09.324632 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 00:44:09.324649 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:00.122) 0:00:59.628 ********
2026-04-09 00:44:09.324658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324675 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324684 | orchestrator |
2026-04-09 00:44:09.324692 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 00:44:09.324701 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:00.138) 0:00:59.767 ********
2026-04-09 00:44:09.324709 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324718 | orchestrator |
2026-04-09 00:44:09.324726 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 00:44:09.324735 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.146) 0:00:59.913 ********
2026-04-09 00:44:09.324744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:09.324752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:09.324761 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:09.324769 | orchestrator |
2026-04-09 00:44:09.324778 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 00:44:09.324786 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.130) 0:01:00.044 ********
2026-04-09 00:44:09.324795 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:09.324804 | orchestrator |
2026-04-09 00:44:09.324812 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 00:44:09.324821 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.126) 0:01:00.171 ********
2026-04-09 00:44:09.324836 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:14.939520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:14.939597 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939604 | orchestrator |
2026-04-09 00:44:14.939610 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 00:44:14.939615 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.329) 0:01:00.500 ********
2026-04-09 00:44:14.939620 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:14.939624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:14.939628 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939632 | orchestrator |
2026-04-09 00:44:14.939647 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 00:44:14.939651 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.142) 0:01:00.643 ********
2026-04-09 00:44:14.939655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:14.939658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:14.939662 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939666 | orchestrator |
2026-04-09 00:44:14.939670 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 00:44:14.939673 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.131) 0:01:00.775 ********
2026-04-09 00:44:14.939677 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939681 | orchestrator |
2026-04-09 00:44:14.939685 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:44:14.939689 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:00.108) 0:01:00.884 ********
2026-04-09 00:44:14.939692 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939696 | orchestrator |
2026-04-09 00:44:14.939700 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 00:44:14.939703 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:00.134) 0:01:01.018 ********
2026-04-09 00:44:14.939707 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939711 | orchestrator |
2026-04-09 00:44:14.939715 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 00:44:14.939719 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:00.125) 0:01:01.144 ********
2026-04-09 00:44:14.939723 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:44:14.939727 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-09 00:44:14.939731 | orchestrator | }
2026-04-09 00:44:14.939736 | orchestrator |
2026-04-09 00:44:14.939739 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 00:44:14.939743 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:00.130) 0:01:01.275 ********
2026-04-09 00:44:14.939747 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:44:14.939751 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-09 00:44:14.939754 | orchestrator | }
2026-04-09 00:44:14.939758 | orchestrator |
2026-04-09 00:44:14.939762 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 00:44:14.939766 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:00.138) 0:01:01.413 ********
2026-04-09 00:44:14.939769 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:44:14.939773 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 00:44:14.939777 | orchestrator | }
2026-04-09 00:44:14.939781 | orchestrator |
2026-04-09 00:44:14.939784 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 00:44:14.939788 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:00.126) 0:01:01.540 ********
2026-04-09 00:44:14.939807 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:14.939811 | orchestrator |
2026-04-09 00:44:14.939815 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 00:44:14.939818 | orchestrator | Thursday 09 April 2026 00:44:11 +0000 (0:00:00.532) 0:01:02.073 ********
2026-04-09 00:44:14.939822 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:14.939826 | orchestrator |
2026-04-09 00:44:14.939830 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 00:44:14.939833 | orchestrator | Thursday 09 April 2026 00:44:11 +0000 (0:00:00.528) 0:01:02.601 ********
2026-04-09 00:44:14.939837 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:14.939841 | orchestrator |
2026-04-09 00:44:14.939845 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 00:44:14.939848 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:00.523) 0:01:03.125 ********
2026-04-09 00:44:14.939852 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:14.939856 | orchestrator |
2026-04-09 00:44:14.939860 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 00:44:14.939863 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:00.266) 0:01:03.392 ********
2026-04-09 00:44:14.939867 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939871 | orchestrator |
2026-04-09 00:44:14.939875 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 00:44:14.939878 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:00.092) 0:01:03.485 ********
2026-04-09 00:44:14.939882 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939886 | orchestrator |
2026-04-09 00:44:14.939890 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 00:44:14.939893 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:00.085) 0:01:03.570 ********
2026-04-09 00:44:14.939897 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:44:14.939901 | orchestrator |     "vgs_report": {
2026-04-09 00:44:14.939905 | orchestrator |         "vg": []
2026-04-09 00:44:14.939919 | orchestrator |     }
2026-04-09 00:44:14.939924 | orchestrator | }
2026-04-09 00:44:14.939928 | orchestrator |
2026-04-09 00:44:14.939931 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 00:44:14.939935 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:00.124) 0:01:03.695 ********
2026-04-09 00:44:14.939939 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939943 | orchestrator |
2026-04-09 00:44:14.939947 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 00:44:14.939951 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:00.128) 0:01:03.818 ********
2026-04-09 00:44:14.939954 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939958 | orchestrator |
2026-04-09 00:44:14.939962 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 00:44:14.939965 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.126) 0:01:03.946 ********
2026-04-09 00:44:14.939969 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939973 | orchestrator |
2026-04-09 00:44:14.939977 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 00:44:14.939984 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.126) 0:01:04.073 ********
2026-04-09 00:44:14.939988 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.939991 | orchestrator |
2026-04-09 00:44:14.939995 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-09 00:44:14.939999 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.113) 0:01:04.186 ********
2026-04-09 00:44:14.940002 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940006 | orchestrator |
2026-04-09 00:44:14.940010 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-09 00:44:14.940014 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.112) 0:01:04.298 ********
2026-04-09 00:44:14.940017 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940025 | orchestrator |
2026-04-09 00:44:14.940029 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-09 00:44:14.940032 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.119) 0:01:04.418 ********
2026-04-09 00:44:14.940036 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940040 | orchestrator |
2026-04-09 00:44:14.940044 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-09 00:44:14.940047 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.117) 0:01:04.535 ********
2026-04-09 00:44:14.940052 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940056 | orchestrator |
2026-04-09 00:44:14.940061 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-09 00:44:14.940065 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.113) 0:01:04.648 ********
2026-04-09 00:44:14.940069 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940074 | orchestrator |
2026-04-09 00:44:14.940078 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-09 00:44:14.940082 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.277) 0:01:04.926 ********
2026-04-09 00:44:14.940087 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940091 | orchestrator |
2026-04-09 00:44:14.940096 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-09 00:44:14.940100 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.115) 0:01:05.042 ********
2026-04-09 00:44:14.940104 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940109 | orchestrator |
2026-04-09 00:44:14.940113 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-09 00:44:14.940117 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.132) 0:01:05.174 ********
2026-04-09 00:44:14.940122 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940126 | orchestrator |
2026-04-09 00:44:14.940131 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-09 00:44:14.940135 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.107) 0:01:05.282 ********
2026-04-09 00:44:14.940140 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940144 | orchestrator |
2026-04-09 00:44:14.940150 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-09 00:44:14.940157 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.110) 0:01:05.393 ********
2026-04-09 00:44:14.940163 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940168 | orchestrator |
2026-04-09 00:44:14.940174 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-09 00:44:14.940179 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.128) 0:01:05.521 ********
2026-04-09 00:44:14.940186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:14.940192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:14.940199 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940205 | orchestrator |
2026-04-09 00:44:14.940211 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-09 00:44:14.940217 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.141) 0:01:05.663 ********
2026-04-09 00:44:14.940222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:14.940230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:14.940235 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:14.940241 | orchestrator |
2026-04-09 00:44:14.940247 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-09 00:44:14.940257 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.129) 0:01:05.793 ********
2026-04-09 00:44:14.940270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959493 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:17.959505 | orchestrator |
2026-04-09 00:44:17.959511 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-09 00:44:17.959516 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.139) 0:01:05.932 ********
2026-04-09 00:44:17.959521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959542 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:17.959546 | orchestrator |
2026-04-09 00:44:17.959549 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-09 00:44:17.959553 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.140) 0:01:06.073 ********
2026-04-09 00:44:17.959557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959564 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:17.959568 | orchestrator |
2026-04-09 00:44:17.959572 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-09 00:44:17.959576 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.145) 0:01:06.219 ********
2026-04-09 00:44:17.959580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959587 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:17.959591 | orchestrator |
2026-04-09 00:44:17.959595 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-09 00:44:17.959599 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.123) 0:01:06.342 ********
2026-04-09 00:44:17.959602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959606 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959610 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:17.959613 | orchestrator |
2026-04-09 00:44:17.959617 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-09 00:44:17.959621 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.285) 0:01:06.627 ********
2026-04-09 00:44:17.959625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959633 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:17.959653 | orchestrator |
2026-04-09 00:44:17.959657 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-09 00:44:17.959660 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.146) 0:01:06.774 ********
2026-04-09 00:44:17.959664 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:17.959669 | orchestrator |
2026-04-09 00:44:17.959673 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-09 00:44:17.959677 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:00.533) 0:01:07.307 ********
2026-04-09 00:44:17.959680 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:17.959684 | orchestrator |
2026-04-09 00:44:17.959688 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-09 00:44:17.959692 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.663) 0:01:07.971 ********
2026-04-09 00:44:17.959695 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:44:17.959700 | orchestrator |
2026-04-09 00:44:17.959704 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-09 00:44:17.959709 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.131) 0:01:08.102 ********
2026-04-09 00:44:17.959714 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'vg_name': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:44:17.959719 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'vg_name': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:44:17.959724 | orchestrator |
2026-04-09 00:44:17.959728 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-09 00:44:17.959733 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.179) 0:01:08.282 ********
2026-04-09 00:44:17.959749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg':
'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})  2026-04-09 00:44:17.959753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})  2026-04-09 00:44:17.959758 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:17.959762 | orchestrator | 2026-04-09 00:44:17.959767 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-09 00:44:17.959771 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.143) 0:01:08.425 ******** 2026-04-09 00:44:17.959775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})  2026-04-09 00:44:17.959780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})  2026-04-09 00:44:17.959784 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:17.959788 | orchestrator | 2026-04-09 00:44:17.959793 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 00:44:17.959797 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.139) 0:01:08.564 ******** 2026-04-09 00:44:17.959802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})  2026-04-09 00:44:17.959807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})  2026-04-09 00:44:17.959811 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:17.959815 | orchestrator | 2026-04-09 00:44:17.959820 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 
00:44:17.959824 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.154) 0:01:08.719 ******** 2026-04-09 00:44:17.959829 | orchestrator | ok: [testbed-node-5] => { 2026-04-09 00:44:17.959833 | orchestrator |  "lvm_report": { 2026-04-09 00:44:17.959838 | orchestrator |  "lv": [ 2026-04-09 00:44:17.959847 | orchestrator |  { 2026-04-09 00:44:17.959852 | orchestrator |  "lv_name": "osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256", 2026-04-09 00:44:17.959857 | orchestrator |  "vg_name": "ceph-6c03351d-b2bb-55a5-9b19-7d0118202256" 2026-04-09 00:44:17.959861 | orchestrator |  }, 2026-04-09 00:44:17.959866 | orchestrator |  { 2026-04-09 00:44:17.959870 | orchestrator |  "lv_name": "osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd", 2026-04-09 00:44:17.959875 | orchestrator |  "vg_name": "ceph-e77990f9-27fa-58e8-a0b8-915245e923bd" 2026-04-09 00:44:17.959879 | orchestrator |  } 2026-04-09 00:44:17.959884 | orchestrator |  ], 2026-04-09 00:44:17.959888 | orchestrator |  "pv": [ 2026-04-09 00:44:17.959892 | orchestrator |  { 2026-04-09 00:44:17.959895 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 00:44:17.959899 | orchestrator |  "vg_name": "ceph-e77990f9-27fa-58e8-a0b8-915245e923bd" 2026-04-09 00:44:17.959903 | orchestrator |  }, 2026-04-09 00:44:17.959907 | orchestrator |  { 2026-04-09 00:44:17.959910 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 00:44:17.959914 | orchestrator |  "vg_name": "ceph-6c03351d-b2bb-55a5-9b19-7d0118202256" 2026-04-09 00:44:17.959918 | orchestrator |  } 2026-04-09 00:44:17.959922 | orchestrator |  ] 2026-04-09 00:44:17.959926 | orchestrator |  } 2026-04-09 00:44:17.959930 | orchestrator | } 2026-04-09 00:44:17.959934 | orchestrator | 2026-04-09 00:44:17.959938 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:44:17.959941 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-09 00:44:17.959945 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-09 00:44:17.959949 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-09 00:44:17.959997 | orchestrator | 2026-04-09 00:44:17.960002 | orchestrator | 2026-04-09 00:44:17.960006 | orchestrator | 2026-04-09 00:44:17.960015 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:44:17.960018 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:00.138) 0:01:08.858 ******** 2026-04-09 00:44:17.960022 | orchestrator | =============================================================================== 2026-04-09 00:44:17.960026 | orchestrator | Create block VGs -------------------------------------------------------- 5.61s 2026-04-09 00:44:17.960030 | orchestrator | Create block LVs -------------------------------------------------------- 4.06s 2026-04-09 00:44:17.960054 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s 2026-04-09 00:44:17.960058 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.68s 2026-04-09 00:44:17.960062 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.63s 2026-04-09 00:44:17.960066 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2026-04-09 00:44:17.960070 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.52s 2026-04-09 00:44:17.960073 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s 2026-04-09 00:44:17.960080 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2026-04-09 00:44:18.221618 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-04-09 
00:44:18.222005 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-04-09 00:44:18.222068 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-04-09 00:44:18.222075 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-04-09 00:44:18.222081 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-04-09 00:44:18.222120 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.70s 2026-04-09 00:44:18.222125 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-04-09 00:44:18.222141 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.67s 2026-04-09 00:44:18.222150 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-04-09 00:44:18.222158 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.66s 2026-04-09 00:44:18.222166 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-04-09 00:44:29.681553 | orchestrator | 2026-04-09 00:44:29 | INFO  | Prepare task for execution of facts. 2026-04-09 00:44:29.751695 | orchestrator | 2026-04-09 00:44:29 | INFO  | Task 518dba7a-2ac6-40db-8502-36b5bb8e5b45 (facts) was prepared for execution. 2026-04-09 00:44:29.751779 | orchestrator | 2026-04-09 00:44:29 | INFO  | It takes a moment until task 518dba7a-2ac6-40db-8502-36b5bb8e5b45 (facts) has been started and output is visible here. 
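The play above gathers Ceph LVs and PVs via the JSON report output of `lvs`/`pvs` and merges them into the `lvm_report` structure it prints ("Combine JSON from _lvs_cmd_output/_pvs_cmd_output"). A minimal sketch of that combine step, assuming the standard LVM2 `--reportformat json` layout; the helper name `combine_report` and the sample strings are made up for illustration (values copied from the report printed above):

```python
import json

def combine_report(lvs_json: str, pvs_json: str) -> dict:
    # lvs/pvs with --reportformat json wrap their rows in
    # {"report": [{"lv": [...]}]} / {"report": [{"pv": [...]}]}
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    # Mirror the lvm_report structure printed by the play above
    return {"lv": lv, "pv": pv}

# Sample command output shaped like `lvs -o lv_name,vg_name --reportformat json`
lvs_out = '{"report": [{"lv": [{"lv_name": "osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256", "vg_name": "ceph-6c03351d-b2bb-55a5-9b19-7d0118202256"}]}]}'
# ... and `pvs -o pv_name,vg_name --reportformat json`
pvs_out = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-e77990f9-27fa-58e8-a0b8-915245e923bd"}]}]}'

report = combine_report(lvs_out, pvs_out)
# "Create list of VG/LV names" then builds vg/lv pairs from the lv entries
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in report["lv"]]
```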
2026-04-09 00:44:40.390334 | orchestrator | 2026-04-09 00:44:40.390499 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 00:44:40.390513 | orchestrator | 2026-04-09 00:44:40.390521 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 00:44:40.390537 | orchestrator | Thursday 09 April 2026 00:44:32 +0000 (0:00:00.322) 0:00:00.322 ******** 2026-04-09 00:44:40.390543 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:40.390558 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:44:40.390564 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:44:40.390571 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:44:40.390578 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:44:40.390584 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:44:40.390589 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:40.390597 | orchestrator | 2026-04-09 00:44:40.390603 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-09 00:44:40.390661 | orchestrator | Thursday 09 April 2026 00:44:34 +0000 (0:00:01.265) 0:00:01.588 ******** 2026-04-09 00:44:40.390668 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:44:40.390676 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:44:40.390683 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:44:40.390690 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:44:40.390697 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:44:40.390705 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:40.390713 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:40.390719 | orchestrator | 2026-04-09 00:44:40.390726 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 00:44:40.390733 | orchestrator | 2026-04-09 00:44:40.390739 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-09 00:44:40.390745 | orchestrator | Thursday 09 April 2026 00:44:35 +0000 (0:00:01.039) 0:00:02.627 ******** 2026-04-09 00:44:40.390752 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:44:40.390759 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:44:40.390765 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:44:40.390771 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:40.390776 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:44:40.390782 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:44:40.390788 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:44:40.390793 | orchestrator | 2026-04-09 00:44:40.390799 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 00:44:40.390805 | orchestrator | 2026-04-09 00:44:40.390811 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 00:44:40.390816 | orchestrator | Thursday 09 April 2026 00:44:39 +0000 (0:00:04.549) 0:00:07.177 ******** 2026-04-09 00:44:40.390821 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:44:40.390827 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:44:40.390854 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:44:40.390860 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:44:40.390866 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:44:40.390871 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:40.390877 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:40.390882 | orchestrator | 2026-04-09 00:44:40.390888 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:44:40.390894 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:44:40.390901 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-09 00:44:40.390907 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:44:40.390913 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:44:40.390919 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:44:40.390926 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:44:40.390932 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:44:40.390937 | orchestrator | 2026-04-09 00:44:40.390942 | orchestrator | 2026-04-09 00:44:40.390949 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:44:40.390954 | orchestrator | Thursday 09 April 2026 00:44:40 +0000 (0:00:00.447) 0:00:07.624 ******** 2026-04-09 00:44:40.390960 | orchestrator | =============================================================================== 2026-04-09 00:44:40.390965 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s 2026-04-09 00:44:40.390971 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.27s 2026-04-09 00:44:40.390991 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2026-04-09 00:44:40.390998 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-04-09 00:44:51.784080 | orchestrator | 2026-04-09 00:44:51 | INFO  | Prepare task for execution of frr. 2026-04-09 00:44:51.858804 | orchestrator | 2026-04-09 00:44:51 | INFO  | Task c9f703a0-b966-4596-87f8-845e583f756d (frr) was prepared for execution. 
2026-04-09 00:44:51.858983 | orchestrator | 2026-04-09 00:44:51 | INFO  | It takes a moment until task c9f703a0-b966-4596-87f8-845e583f756d (frr) has been started and output is visible here. 2026-04-09 00:45:15.663955 | orchestrator | 2026-04-09 00:45:15.664096 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-09 00:45:15.664128 | orchestrator | 2026-04-09 00:45:15.664148 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-09 00:45:15.664166 | orchestrator | Thursday 09 April 2026 00:44:55 +0000 (0:00:00.324) 0:00:00.324 ******** 2026-04-09 00:45:15.664184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:45:15.664204 | orchestrator | 2026-04-09 00:45:15.664224 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-09 00:45:15.664242 | orchestrator | Thursday 09 April 2026 00:44:55 +0000 (0:00:00.220) 0:00:00.545 ******** 2026-04-09 00:45:15.664260 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:15.664279 | orchestrator | 2026-04-09 00:45:15.664298 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-09 00:45:15.664349 | orchestrator | Thursday 09 April 2026 00:44:56 +0000 (0:00:01.544) 0:00:02.089 ******** 2026-04-09 00:45:15.664368 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:15.664386 | orchestrator | 2026-04-09 00:45:15.664404 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-09 00:45:15.664422 | orchestrator | Thursday 09 April 2026 00:45:05 +0000 (0:00:08.644) 0:00:10.734 ******** 2026-04-09 00:45:15.664440 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:15.664506 | orchestrator | 2026-04-09 00:45:15.664527 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-09 00:45:15.664550 | orchestrator | Thursday 09 April 2026 00:45:06 +0000 (0:00:00.930) 0:00:11.665 ******** 2026-04-09 00:45:15.664577 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:15.664596 | orchestrator | 2026-04-09 00:45:15.664615 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-09 00:45:15.664634 | orchestrator | Thursday 09 April 2026 00:45:07 +0000 (0:00:00.843) 0:00:12.508 ******** 2026-04-09 00:45:15.664652 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:15.664672 | orchestrator | 2026-04-09 00:45:15.664692 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-09 00:45:15.664711 | orchestrator | Thursday 09 April 2026 00:45:08 +0000 (0:00:01.095) 0:00:13.604 ******** 2026-04-09 00:45:15.664730 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:45:15.664749 | orchestrator | 2026-04-09 00:45:15.664769 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-09 00:45:15.664787 | orchestrator | Thursday 09 April 2026 00:45:08 +0000 (0:00:00.156) 0:00:13.760 ******** 2026-04-09 00:45:15.664805 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:45:15.664824 | orchestrator | 2026-04-09 00:45:15.664843 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-09 00:45:15.664862 | orchestrator | Thursday 09 April 2026 00:45:08 +0000 (0:00:00.225) 0:00:13.985 ******** 2026-04-09 00:45:15.664881 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:45:15.664901 | orchestrator | 2026-04-09 00:45:15.664921 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-09 00:45:15.664941 | orchestrator | Thursday 09 April 2026 00:45:08 +0000 (0:00:00.147) 0:00:14.133 ******** 2026-04-09 
00:45:15.664960 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:45:15.664972 | orchestrator | 2026-04-09 00:45:15.664983 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-09 00:45:15.664995 | orchestrator | Thursday 09 April 2026 00:45:09 +0000 (0:00:00.130) 0:00:14.264 ******** 2026-04-09 00:45:15.665006 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:45:15.665016 | orchestrator | 2026-04-09 00:45:15.665028 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-09 00:45:15.665039 | orchestrator | Thursday 09 April 2026 00:45:09 +0000 (0:00:00.147) 0:00:14.412 ******** 2026-04-09 00:45:15.665050 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:15.665060 | orchestrator | 2026-04-09 00:45:15.665071 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-09 00:45:15.665082 | orchestrator | Thursday 09 April 2026 00:45:10 +0000 (0:00:00.831) 0:00:15.243 ******** 2026-04-09 00:45:15.665093 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-09 00:45:15.665104 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-09 00:45:15.665116 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-09 00:45:15.665126 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-09 00:45:15.665137 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-09 00:45:15.665149 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-09 00:45:15.665183 | orchestrator | 2026-04-09 00:45:15.665210 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-09 00:45:15.665247 | orchestrator | Thursday 09 April 2026 00:45:13 +0000 (0:00:02.930) 0:00:18.174 ******** 2026-04-09 00:45:15.665265 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:15.665281 | orchestrator | 2026-04-09 00:45:15.665299 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-09 00:45:15.665317 | orchestrator | Thursday 09 April 2026 00:45:14 +0000 (0:00:01.086) 0:00:19.261 ******** 2026-04-09 00:45:15.665334 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:15.665350 | orchestrator | 2026-04-09 00:45:15.665366 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:15.665385 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 00:45:15.665406 | orchestrator | 2026-04-09 00:45:15.665425 | orchestrator | 2026-04-09 00:45:15.665506 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:15.665529 | orchestrator | Thursday 09 April 2026 00:45:15 +0000 (0:00:01.305) 0:00:20.567 ******** 2026-04-09 00:45:15.665548 | orchestrator | =============================================================================== 2026-04-09 00:45:15.665567 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.64s 2026-04-09 00:45:15.665586 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.93s 2026-04-09 00:45:15.665605 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.54s 2026-04-09 00:45:15.665623 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s 2026-04-09 00:45:15.665641 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.10s 
2026-04-09 00:45:15.665660 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.09s 2026-04-09 00:45:15.665677 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.93s 2026-04-09 00:45:15.665695 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.84s 2026-04-09 00:45:15.665712 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.83s 2026-04-09 00:45:15.665728 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.23s 2026-04-09 00:45:15.665745 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-04-09 00:45:15.665761 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-04-09 00:45:15.665778 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-04-09 00:45:15.665794 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-04-09 00:45:15.665810 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-04-09 00:45:15.790982 | orchestrator | 2026-04-09 00:45:15.793719 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Apr 9 00:45:15 UTC 2026 2026-04-09 00:45:15.793781 | orchestrator | 2026-04-09 00:45:16.812215 | orchestrator | 2026-04-09 00:45:16 | INFO  | Collection nutshell is prepared for execution 2026-04-09 00:45:16.912055 | orchestrator | 2026-04-09 00:45:16 | INFO  | A [0] - dotfiles 2026-04-09 00:45:27.045449 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [0] - homer 2026-04-09 00:45:27.045657 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [0] - netdata 2026-04-09 00:45:27.045669 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [0] - openstackclient 2026-04-09 00:45:27.045677 | orchestrator | 2026-04-09 00:45:27 
| INFO  | A [0] - phpmyadmin 2026-04-09 00:45:27.045685 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [0] - common 2026-04-09 00:45:27.049995 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- loadbalancer 2026-04-09 00:45:27.050092 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [2] --- opensearch 2026-04-09 00:45:27.050758 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [2] --- mariadb-ng 2026-04-09 00:45:27.050841 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [3] ---- horizon 2026-04-09 00:45:27.050867 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [3] ---- keystone 2026-04-09 00:45:27.051178 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- neutron 2026-04-09 00:45:27.051204 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [5] ------ wait-for-nova 2026-04-09 00:45:27.051433 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [6] ------- octavia 2026-04-09 00:45:27.053024 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- barbican 2026-04-09 00:45:27.053115 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- designate 2026-04-09 00:45:27.053146 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- ironic 2026-04-09 00:45:27.053175 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- placement 2026-04-09 00:45:27.053410 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- magnum 2026-04-09 00:45:27.055392 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- openvswitch 2026-04-09 00:45:27.055430 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [2] --- ovn 2026-04-09 00:45:27.055547 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- memcached 2026-04-09 00:45:27.055559 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- redis 2026-04-09 00:45:27.055911 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- rabbitmq-ng 2026-04-09 00:45:27.056051 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [0] - kubernetes 2026-04-09 00:45:27.058765 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- 
kubeconfig 2026-04-09 00:45:27.058812 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- copy-kubeconfig 2026-04-09 00:45:27.059189 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [0] - ceph 2026-04-09 00:45:27.061670 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [1] -- ceph-pools 2026-04-09 00:45:27.061713 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [2] --- copy-ceph-keys 2026-04-09 00:45:27.061723 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [3] ---- cephclient 2026-04-09 00:45:27.061732 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-09 00:45:27.061741 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- wait-for-keystone 2026-04-09 00:45:27.062190 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-09 00:45:27.062217 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [5] ------ glance 2026-04-09 00:45:27.062330 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [5] ------ cinder 2026-04-09 00:45:27.062352 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [5] ------ nova 2026-04-09 00:45:27.062568 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [4] ----- prometheus 2026-04-09 00:45:27.062735 | orchestrator | 2026-04-09 00:45:27 | INFO  | A [5] ------ grafana 2026-04-09 00:45:27.328124 | orchestrator | 2026-04-09 00:45:27 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-09 00:45:27.328228 | orchestrator | 2026-04-09 00:45:27 | INFO  | Tasks are running in the background 2026-04-09 00:45:29.312975 | orchestrator | 2026-04-09 00:45:29 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-09 00:45:31.511923 | orchestrator | 2026-04-09 00:45:31 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:45:31.512861 | orchestrator | 2026-04-09 00:45:31 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:45:31.515724 | orchestrator | 2026-04-09 00:45:31 | INFO 
 | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:45:31.516283 | orchestrator | 2026-04-09 00:45:31 | INFO  | Task 520c3026-7200-48ce-9412-03744403f201 is in state STARTED 2026-04-09 00:45:31.516963 | orchestrator | 2026-04-09 00:45:31 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:45:31.518816 | orchestrator | 2026-04-09 00:45:31 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:45:31.521728 | orchestrator | 2026-04-09 00:45:31 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:45:31.521792 | orchestrator | 2026-04-09 00:45:31 | INFO  | Wait 1 second(s) until the next check
00:45:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:49.996375 | orchestrator | 2026-04-09 00:45:49 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:45:49.996464 | orchestrator | 2026-04-09 00:45:49 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:45:49.996511 | orchestrator | 2026-04-09 00:45:49 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:45:49.998955 | orchestrator | 2026-04-09 00:45:49.998998 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-09 00:45:49.999004 | orchestrator | 2026-04-09 00:45:49.999009 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-04-09 00:45:49.999013 | orchestrator | Thursday 09 April 2026 00:45:37 +0000 (0:00:00.447) 0:00:00.447 ******** 2026-04-09 00:45:49.999017 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:49.999022 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:49.999026 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:49.999030 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:45:49.999034 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:45:49.999038 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:45:49.999042 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:45:49.999046 | orchestrator | 2026-04-09 00:45:49.999049 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-04-09 00:45:49.999053 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:04.405) 0:00:04.852 ******** 2026-04-09 00:45:49.999074 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-09 00:45:49.999083 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-09 00:45:49.999087 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-09 00:45:49.999091 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-09 00:45:49.999094 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-09 00:45:49.999098 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-09 00:45:49.999102 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-09 00:45:49.999106 | orchestrator | 2026-04-09 00:45:49.999110 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-04-09 00:45:49.999114 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:01.980) 0:00:06.832 ******** 2026-04-09 00:45:49.999120 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:42.933250', 'end': '2026-04-09 00:45:42.939925', 'delta': '0:00:00.006675', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999128 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:43.007870', 'end': '2026-04-09 00:45:43.016227', 'delta': '0:00:00.008357', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999132 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:43.655918', 'end': '2026-04-09 00:45:43.664081', 'delta': '0:00:00.008163', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999172 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:43.544891', 'end': '2026-04-09 00:45:43.554046', 'delta': '0:00:00.009155', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999196 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:43.126480', 'end': '2026-04-09 00:45:43.134799', 'delta': '0:00:00.008319', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999203 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:43.008144', 'end': '2026-04-09 00:45:43.018518', 'delta': '0:00:00.010374', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999209 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:45:42.876378', 'end': '2026-04-09 00:45:42.880077', 'delta': '0:00:00.003699', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-09 00:45:49.999215 | orchestrator | 2026-04-09 00:45:49.999221 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-04-09 00:45:49.999226 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:01.650) 0:00:08.483 ******** 2026-04-09 00:45:49.999233 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-09 00:45:49.999239 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-09 00:45:49.999245 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-09 00:45:49.999252 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-09 00:45:49.999258 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-09 00:45:49.999262 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-09 00:45:49.999265 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-09 00:45:49.999269 | orchestrator | 2026-04-09 00:45:49.999273 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-04-09 00:45:49.999277 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:01.181) 0:00:09.664 ******** 2026-04-09 00:45:49.999280 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-09 00:45:49.999285 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-09 00:45:49.999288 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-09 00:45:49.999297 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-09 00:45:49.999300 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-09 00:45:49.999304 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-09 00:45:49.999308 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-09 00:45:49.999312 | orchestrator | 2026-04-09 00:45:49.999316 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:49.999325 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999331 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999335 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999339 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999342 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999349 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999353 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:49.999356 | orchestrator | 2026-04-09 00:45:49.999360 | orchestrator | 2026-04-09 00:45:49.999364 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:49.999368 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:02.523) 0:00:12.188 ******** 2026-04-09 00:45:49.999371 | orchestrator | =============================================================================== 2026-04-09 00:45:49.999375 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.41s 2026-04-09 00:45:49.999379 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.52s 2026-04-09 00:45:49.999383 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.98s 2026-04-09 00:45:49.999386 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.65s 2026-04-09 00:45:49.999390 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.18s 2026-04-09 00:45:49.999394 | orchestrator | 2026-04-09 00:45:49 | INFO  | Task 520c3026-7200-48ce-9412-03744403f201 is in state SUCCESS 2026-04-09 00:45:49.999398 | orchestrator | 2026-04-09 00:45:49 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:45:49.999402 | orchestrator | 2026-04-09 00:45:49 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:45:50.001997 | orchestrator | 2026-04-09 00:45:50 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:45:50.002068 | orchestrator | 2026-04-09 00:45:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:53.328146 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:45:53.329720 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:45:53.335730 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:45:53.335794 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:45:53.335800 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:45:53.338321 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:45:53.338359 | orchestrator | 2026-04-09 00:45:53 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:45:53.338365 | orchestrator | 2026-04-09 00:45:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:56.431180 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:45:56.431355 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state 
STARTED 2026-04-09 00:45:56.436108 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:45:56.443286 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:45:56.447468 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:45:56.450389 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:45:56.451537 | orchestrator | 2026-04-09 00:45:56 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:45:56.451710 | orchestrator | 2026-04-09 00:45:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:59.536217 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:45:59.537098 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:45:59.538595 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:45:59.539515 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:45:59.541541 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:45:59.541915 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:45:59.543608 | orchestrator | 2026-04-09 00:45:59 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:45:59.543638 | orchestrator | 2026-04-09 00:45:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:02.642007 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 
2026-04-09 00:46:02.643206 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:02.643968 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:02.644928 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:02.645466 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:02.645964 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:46:02.647001 | orchestrator | 2026-04-09 00:46:02 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:46:02.647060 | orchestrator | 2026-04-09 00:46:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:05.807863 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:05.807993 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:05.808003 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:05.808010 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:05.808017 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:05.808024 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:46:05.808031 | orchestrator | 2026-04-09 00:46:05 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:46:05.808038 | orchestrator | 2026-04-09 00:46:05 | INFO  | Wait 1 second(s) until the next check 
2026-04-09 00:46:09.052563 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:09.055119 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:09.056583 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:09.061921 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:09.062070 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:09.064648 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:46:09.067670 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:46:09.067734 | orchestrator | 2026-04-09 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:12.197274 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:12.200191 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:12.203274 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:12.206564 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:12.208232 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:12.211813 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:46:12.212059 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is 
in state STARTED 2026-04-09 00:46:12.212599 | orchestrator | 2026-04-09 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:15.290535 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:15.290624 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:15.290636 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:15.290644 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:15.290653 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:15.291084 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:46:15.291098 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state STARTED 2026-04-09 00:46:15.291108 | orchestrator | 2026-04-09 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:18.340126 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:18.342108 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:18.344448 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:18.344687 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:18.347638 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:18.350566 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in 
state STARTED 2026-04-09 00:46:18.353149 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 0bfd12ed-bbe4-43f7-830b-ac41e29c4a34 is in state SUCCESS 2026-04-09 00:46:18.353182 | orchestrator | 2026-04-09 00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:21.607346 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:21.607407 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:21.607416 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:21.619027 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:21.622437 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:21.629147 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state STARTED 2026-04-09 00:46:21.629222 | orchestrator | 2026-04-09 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:24.701647 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:46:24.701737 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED 2026-04-09 00:46:24.701748 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:46:24.702100 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED 2026-04-09 00:46:24.702129 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:46:24.702137 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state 
STARTED
2026-04-09 00:46:24.702144 | orchestrator | 2026-04-09 00:46:24 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:27.706321 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:27.706407 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:27.706417 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:27.706448 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:27.706455 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:27.706461 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 12195eb9-e057-4e57-bf5e-970c7f8ed43a is in state SUCCESS
2026-04-09 00:46:27.706468 | orchestrator | 2026-04-09 00:46:27 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:30.749450 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:30.751051 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:30.752408 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:30.754535 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:30.755913 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:30.755947 | orchestrator | 2026-04-09 00:46:30 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:33.797077 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:33.801128 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:33.801200 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:33.801910 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:33.802222 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:33.802248 | orchestrator | 2026-04-09 00:46:33 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:36.882154 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:36.883906 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:36.885420 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:36.886679 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:36.891678 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:36.891742 | orchestrator | 2026-04-09 00:46:36 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:39.958902 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:39.960811 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:39.961177 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:39.961715 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:39.962600 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:39.962636 | orchestrator | 2026-04-09 00:46:39 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:43.008955 | orchestrator | 2026-04-09 00:46:43 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:43.028896 | orchestrator | 2026-04-09 00:46:43 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:43.042283 | orchestrator | 2026-04-09 00:46:43 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:43.042339 | orchestrator | 2026-04-09 00:46:43 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:43.042411 | orchestrator | 2026-04-09 00:46:43 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:43.042721 | orchestrator | 2026-04-09 00:46:43 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:46.088617 | orchestrator | 2026-04-09 00:46:46 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:46.089180 | orchestrator | 2026-04-09 00:46:46 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:46.090600 | orchestrator | 2026-04-09 00:46:46 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:46.091654 | orchestrator | 2026-04-09 00:46:46 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:46.092893 | orchestrator | 2026-04-09 00:46:46 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:46.092916 | orchestrator | 2026-04-09 00:46:46 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:49.153539 | orchestrator | 2026-04-09 00:46:49 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:49.153599 | orchestrator | 2026-04-09 00:46:49 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:49.153606 | orchestrator | 2026-04-09 00:46:49 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:49.153612 | orchestrator | 2026-04-09 00:46:49 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:49.153618 | orchestrator | 2026-04-09 00:46:49 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:49.153623 | orchestrator | 2026-04-09 00:46:49 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:52.197633 | orchestrator | 2026-04-09 00:46:52 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:52.199228 | orchestrator | 2026-04-09 00:46:52 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state STARTED
2026-04-09 00:46:52.201579 | orchestrator | 2026-04-09 00:46:52 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:52.203162 | orchestrator | 2026-04-09 00:46:52 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:52.204477 | orchestrator | 2026-04-09 00:46:52 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:52.204545 | orchestrator | 2026-04-09 00:46:52 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:55.244294 | orchestrator | 2026-04-09 00:46:55 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:55.245125 | orchestrator |
2026-04-09 00:46:55.245170 | orchestrator |
2026-04-09 00:46:55.245180 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-09 00:46:55.245188 | orchestrator |
2026-04-09 00:46:55.245195 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-09 00:46:55.245203 | orchestrator | Thursday 09 April 2026 00:45:38 +0000 (0:00:01.375) 0:00:01.375 ********
2026-04-09 00:46:55.245234 | orchestrator | ok: [testbed-manager] => {
2026-04-09 00:46:55.245244 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-09 00:46:55.245251 | orchestrator | }
2026-04-09 00:46:55.245258 | orchestrator |
2026-04-09 00:46:55.245264 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-09 00:46:55.245271 | orchestrator | Thursday 09 April 2026 00:45:39 +0000 (0:00:00.494) 0:00:01.870 ********
2026-04-09 00:46:55.245277 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.245284 | orchestrator |
2026-04-09 00:46:55.245290 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-09 00:46:55.245296 | orchestrator | Thursday 09 April 2026 00:45:42 +0000 (0:00:02.968) 0:00:04.839 ********
2026-04-09 00:46:55.245302 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-09 00:46:55.245309 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-09 00:46:55.245315 | orchestrator |
2026-04-09 00:46:55.245321 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-09 00:46:55.245327 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:01.634) 0:00:06.473 ********
2026-04-09 00:46:55.245333 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245339 | orchestrator |
2026-04-09 00:46:55.245345 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-09 00:46:55.245360 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:02.757) 0:00:09.231 ********
2026-04-09 00:46:55.245364 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245368 | orchestrator |
2026-04-09 00:46:55.245374 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-09 00:46:55.245380 | orchestrator | Thursday 09 April 2026 00:45:48 +0000 (0:00:02.046) 0:00:11.277 ********
2026-04-09 00:46:55.245386 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-09 00:46:55.245414 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.245421 | orchestrator |
2026-04-09 00:46:55.245427 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-09 00:46:55.245433 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:25.498) 0:00:36.776 ********
2026-04-09 00:46:55.245441 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245447 | orchestrator |
2026-04-09 00:46:55.245453 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:46:55.245459 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:46:55.245468 | orchestrator |
2026-04-09 00:46:55.245474 | orchestrator |
2026-04-09 00:46:55.245480 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:46:55.245516 | orchestrator | Thursday 09 April 2026 00:46:17 +0000 (0:00:02.961) 0:00:39.737 ********
2026-04-09 00:46:55.245523 | orchestrator | ===============================================================================
2026-04-09 00:46:55.245529 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.50s
2026-04-09 00:46:55.245535 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.97s
2026-04-09 00:46:55.245541 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.96s
2026-04-09 00:46:55.245546 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.76s
2026-04-09 00:46:55.245552 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.05s
2026-04-09 00:46:55.245559 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.63s
2026-04-09 00:46:55.245565 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.49s
2026-04-09 00:46:55.245571 | orchestrator |
2026-04-09 00:46:55.245577 | orchestrator |
2026-04-09 00:46:55.245584 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-09 00:46:55.245590 | orchestrator |
2026-04-09 00:46:55.245608 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-09 00:46:55.245613 | orchestrator | Thursday 09 April 2026 00:45:38 +0000 (0:00:01.358) 0:00:01.358 ********
2026-04-09 00:46:55.245617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-09 00:46:55.245622 | orchestrator |
2026-04-09 00:46:55.245626 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-09 00:46:55.245630 | orchestrator | Thursday 09 April 2026 00:45:39 +0000 (0:00:00.403) 0:00:01.762 ********
2026-04-09 00:46:55.245635 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-09 00:46:55.245640 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-09 00:46:55.245646 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-09 00:46:55.245652 | orchestrator |
2026-04-09 00:46:55.245658 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-09 00:46:55.245668 | orchestrator | Thursday 09 April 2026 00:45:42 +0000 (0:00:03.596) 0:00:05.359 ********
2026-04-09 00:46:55.245689 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245696 | orchestrator |
2026-04-09 00:46:55.245702 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-09 00:46:55.245708 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:03.613) 0:00:08.972 ********
2026-04-09 00:46:55.245731 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-09 00:46:55.245738 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.245744 | orchestrator |
2026-04-09 00:46:55.245750 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-09 00:46:55.245756 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:32.212) 0:00:41.185 ********
2026-04-09 00:46:55.245763 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245769 | orchestrator |
2026-04-09 00:46:55.245776 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-09 00:46:55.245782 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:00.824) 0:00:42.010 ********
2026-04-09 00:46:55.245789 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.245795 | orchestrator |
2026-04-09 00:46:55.245802 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-09 00:46:55.245809 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:01.000) 0:00:43.010 ********
2026-04-09 00:46:55.245815 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245821 | orchestrator |
2026-04-09 00:46:55.245827 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-09 00:46:55.245834 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:02.507) 0:00:45.517 ********
2026-04-09 00:46:55.245840 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245847 | orchestrator |
2026-04-09 00:46:55.245853 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-09 00:46:55.245859 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:00.860) 0:00:46.378 ********
2026-04-09 00:46:55.245865 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.245871 | orchestrator |
2026-04-09 00:46:55.245878 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-09 00:46:55.245884 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:01.022) 0:00:47.401 ********
2026-04-09 00:46:55.245891 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.245897 | orchestrator |
2026-04-09 00:46:55.245904 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:46:55.245910 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:46:55.245917 | orchestrator |
2026-04-09 00:46:55.245923 | orchestrator |
2026-04-09 00:46:55.245930 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:46:55.245944 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:00.452) 0:00:47.853 ********
2026-04-09 00:46:55.245951 | orchestrator | ===============================================================================
2026-04-09 00:46:55.245958 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.21s
2026-04-09 00:46:55.245964 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.61s
2026-04-09 00:46:55.245971 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.60s
2026-04-09 00:46:55.245978 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.51s
2026-04-09 00:46:55.245984 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.02s
2026-04-09 00:46:55.245991 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.00s
2026-04-09 00:46:55.245997 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.86s
2026-04-09 00:46:55.246003 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.82s
2026-04-09 00:46:55.246150 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s
2026-04-09 00:46:55.246162 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.40s
2026-04-09 00:46:55.246169 | orchestrator |
2026-04-09 00:46:55.246175 | orchestrator |
2026-04-09 00:46:55.246181 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-09 00:46:55.246187 | orchestrator |
2026-04-09 00:46:55.246194 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-09 00:46:55.246200 | orchestrator | Thursday 09 April 2026 00:45:54 +0000 (0:00:00.483) 0:00:00.483 ********
2026-04-09 00:46:55.246206 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.246212 | orchestrator |
2026-04-09 00:46:55.246219 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-09 00:46:55.246224 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:02.319) 0:00:02.803 ********
2026-04-09 00:46:55.246231 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-09 00:46:55.246237 | orchestrator |
2026-04-09 00:46:55.246244 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-09 00:46:55.246250 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:00.980) 0:00:03.783 ********
2026-04-09 00:46:55.246256 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.246262 | orchestrator |
2026-04-09 00:46:55.246268 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-09 00:46:55.246275 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:01.619) 0:00:05.403 ********
2026-04-09 00:46:55.246281 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-09 00:46:55.246287 | orchestrator | ok: [testbed-manager]
2026-04-09 00:46:55.246293 | orchestrator |
2026-04-09 00:46:55.246299 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-09 00:46:55.246305 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:49.758) 0:00:55.162 ********
2026-04-09 00:46:55.246311 | orchestrator | changed: [testbed-manager]
2026-04-09 00:46:55.246317 | orchestrator |
2026-04-09 00:46:55.246323 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:46:55.246336 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:46:55.246343 | orchestrator |
2026-04-09 00:46:55.246349 | orchestrator |
2026-04-09 00:46:55.246356 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:46:55.246370 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:04.221) 0:00:59.383 ********
2026-04-09 00:46:55.246376 | orchestrator | ===============================================================================
2026-04-09 00:46:55.246382 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.76s
2026-04-09 00:46:55.246388 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.22s
2026-04-09 00:46:55.246401 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.32s
2026-04-09 00:46:55.246407 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.62s
2026-04-09 00:46:55.246413 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.98s
2026-04-09 00:46:55.246419 | orchestrator | 2026-04-09 00:46:55 | INFO  | Task bebe3590-b230-4a12-a194-7222f8505af0 is in state SUCCESS
2026-04-09 00:46:55.246426 | orchestrator | 2026-04-09 00:46:55 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:55.249065 | orchestrator | 2026-04-09 00:46:55 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:55.249122 | orchestrator | 2026-04-09 00:46:55 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:55.249128 | orchestrator | 2026-04-09 00:46:55 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:46:58.284409 | orchestrator | 2026-04-09 00:46:58 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:46:58.284610 | orchestrator | 2026-04-09 00:46:58 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:46:58.284807 | orchestrator | 2026-04-09 00:46:58 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:46:58.285717 | orchestrator | 2026-04-09 00:46:58 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:46:58.285787 | orchestrator | 2026-04-09 00:46:58 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:01.322939 | orchestrator | 2026-04-09 00:47:01 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:47:01.322990 | orchestrator | 2026-04-09 00:47:01 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:47:01.323342 | orchestrator | 2026-04-09 00:47:01 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:47:01.324186 | orchestrator | 2026-04-09 00:47:01 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:47:01.324213 | orchestrator | 2026-04-09 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:04.358130 | orchestrator | 2026-04-09 00:47:04 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:47:04.358816 | orchestrator | 2026-04-09 00:47:04 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:47:04.358849 | orchestrator | 2026-04-09 00:47:04 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state STARTED
2026-04-09 00:47:04.359647 | orchestrator | 2026-04-09 00:47:04 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:47:04.359672 | orchestrator | 2026-04-09 00:47:04 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:07.418766 | orchestrator | 2026-04-09 00:47:07 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:47:07.421338 | orchestrator | 2026-04-09 00:47:07 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:47:07.423298 | orchestrator | 2026-04-09 00:47:07 | INFO  | Task 87960382-fde1-4017-b4ee-fb5241c1265d is in state SUCCESS
2026-04-09 00:47:07.423346 | orchestrator |
2026-04-09 00:47:07.423354 | orchestrator |
2026-04-09 00:47:07.423360 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:47:07.423367 | orchestrator |
2026-04-09 00:47:07.423373 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:47:07.423379 | orchestrator | Thursday 09 April 2026 00:45:38 +0000 (0:00:00.772) 0:00:00.772 ********
2026-04-09 00:47:07.423397 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-09 00:47:07.423403 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-09 00:47:07.423409 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-09 00:47:07.423414 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-09 00:47:07.423420 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-09 00:47:07.423426 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-09 00:47:07.423435 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-09 00:47:07.423440 | orchestrator |
2026-04-09 00:47:07.423446 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-09 00:47:07.423452 | orchestrator |
2026-04-09 00:47:07.423458 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-09 00:47:07.423464 | orchestrator | Thursday 09 April 2026 00:45:39 +0000 (0:00:01.564) 0:00:02.337 ********
2026-04-09 00:47:07.423476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:47:07.423483 | orchestrator |
2026-04-09 00:47:07.423522 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-09 00:47:07.423529 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:01.446) 0:00:03.783 ********
2026-04-09 00:47:07.423535 | orchestrator | ok: [testbed-manager]
2026-04-09 00:47:07.423541 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:47:07.423547 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:47:07.423553 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:47:07.423559 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:47:07.423564 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:47:07.423571 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:47:07.423577 | orchestrator |
2026-04-09 00:47:07.423583 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-09 00:47:07.423589 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:02.641) 0:00:06.425 ********
2026-04-09 00:47:07.423595 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:47:07.423601 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:47:07.423607 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:47:07.423613 | orchestrator | ok: [testbed-manager]
2026-04-09 00:47:07.423618 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:47:07.423625 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:47:07.423631 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:47:07.423637 | orchestrator |
2026-04-09 00:47:07.423643 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-09 00:47:07.423648 | orchestrator | Thursday 09 April 2026 00:45:47 +0000 (0:00:03.471) 0:00:09.896 ********
2026-04-09 00:47:07.423654 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:07.423660 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:07.423666 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:07.423671 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:07.423677 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:07.423683 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:07.423689 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:07.423694 | orchestrator |
2026-04-09 00:47:07.423700 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-09 00:47:07.423706 | orchestrator | Thursday 09 April 2026 00:45:48 +0000 (0:00:01.591) 0:00:11.488 ********
2026-04-09 00:47:07.423712 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:07.423718 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:07.423724 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:07.423730 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:07.423736 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:07.423742 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:07.423754 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:07.423760 | orchestrator |
2026-04-09 00:47:07.423766 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-09 00:47:07.423773 | orchestrator | Thursday 09 April 2026 00:46:00 +0000 (0:00:11.461) 0:00:22.949 ********
2026-04-09 00:47:07.423779 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:07.423785 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:07.423791 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:07.423797 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:07.423803 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:07.423809 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:07.423815 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:07.423821 | orchestrator |
2026-04-09 00:47:07.423826 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-09 00:47:07.423832 | orchestrator | Thursday 09 April 2026 00:46:38 +0000 (0:00:38.231) 0:01:01.181 ********
2026-04-09 00:47:07.423838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:47:07.423845 | orchestrator |
2026-04-09 00:47:07.423851 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-09 00:47:07.423857 | orchestrator | Thursday 09 April 2026 00:46:39 +0000 (0:00:01.435) 0:01:02.617 ********
2026-04-09 00:47:07.423863 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-09 00:47:07.423869 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-09 00:47:07.423886 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-09 00:47:07.423892 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-09 00:47:07.423899 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-09 00:47:07.423905 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-09 00:47:07.423912 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-09 00:47:07.423919 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-09 00:47:07.423925 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-09 00:47:07.423932 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-09 00:47:07.423939 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-09 00:47:07.423945 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-09 00:47:07.423952 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-09 00:47:07.423959 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-09 00:47:07.423965 | orchestrator |
2026-04-09 00:47:07.423972 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-09 00:47:07.423980 | orchestrator | Thursday 09 April 2026 00:46:44 +0000 (0:00:04.149) 0:01:06.767 ********
2026-04-09 00:47:07.423987 | orchestrator | ok: [testbed-manager]
2026-04-09 00:47:07.423994 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:47:07.424001 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:47:07.424007 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:47:07.424013 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:47:07.424019 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:47:07.424026 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:47:07.424031 | orchestrator |
2026-04-09 00:47:07.424037 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-09 00:47:07.424043 | orchestrator | Thursday 09 April 2026 00:46:45 +0000 (0:00:01.032) 0:01:07.799 ********
2026-04-09 00:47:07.424050 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:07.424056 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:07.424063 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:07.424069 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:07.424075 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:07.424081 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:07.424092 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:07.424097 | orchestrator |
2026-04-09 00:47:07.424104 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-09 00:47:07.424109 | orchestrator | Thursday 09 April 2026 00:46:46 +0000 (0:00:01.214) 0:01:09.014 ********
2026-04-09 00:47:07.424115 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:47:07.424121 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:47:07.424128 | orchestrator | ok: [testbed-manager]
2026-04-09 00:47:07.424135 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:47:07.424140 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:47:07.424147 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:47:07.424153 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:47:07.424159 | orchestrator |
2026-04-09 00:47:07.424165 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-09 00:47:07.424171 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:02.372) 0:01:11.386 ********
2026-04-09 00:47:07.424177 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:47:07.424183 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:47:07.424189 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:47:07.424195 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:47:07.424201 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:47:07.424207 | orchestrator | ok: [testbed-manager]
2026-04-09 00:47:07.424213 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:47:07.424219 | orchestrator |
2026-04-09 00:47:07.424225 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-09 00:47:07.424290 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:02.152) 0:01:13.539 ********
2026-04-09 00:47:07.424299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-09 00:47:07.424307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:47:07.424313 | orchestrator |
2026-04-09 00:47:07.424320 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-09 00:47:07.424326 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:01.449) 0:01:14.988 ********
2026-04-09 00:47:07.424332 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:07.424339 | orchestrator |
2026-04-09 00:47:07.424346 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-09 00:47:07.424352 | orchestrator | Thursday 09 April 2026 00:46:54 +0000 (0:00:01.908) 0:01:16.896 ********
2026-04-09 00:47:07.424358 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:07.424365 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:07.424371 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:07.424377 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:07.424383 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:07.424389 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:07.424395 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:07.424401 | orchestrator |
2026-04-09 00:47:07.424407 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:47:07.424413 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424420 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424426 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424439 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424445 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424455 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424461 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:47:07.424467 | orchestrator |
2026-04-09 00:47:07.424474 | orchestrator |
2026-04-09 00:47:07.424479 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:47:07.424485 | orchestrator | Thursday 09 April 2026 00:47:05 +0000 (0:00:11.104) 0:01:28.001 ********
2026-04-09 00:47:07.424507 | orchestrator | ===============================================================================
2026-04-09 00:47:07.424515 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 38.23s
2026-04-09 00:47:07.424521 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.46s
2026-04-09 00:47:07.424527 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.10s
2026-04-09 00:47:07.424533 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.15s
2026-04-09 00:47:07.424539 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.47s
2026-04-09 00:47:07.424545 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.64s
2026-04-09 00:47:07.424551 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.37s
2026-04-09 00:47:07.424558 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.15s
2026-04-09 00:47:07.424563 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.91s
2026-04-09 00:47:07.424569 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.59s
2026-04-09 00:47:07.424575 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.56s
2026-04-09 00:47:07.424580 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.45s
2026-04-09 00:47:07.424587 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.45s
2026-04-09 00:47:07.424593 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.44s
2026-04-09 00:47:07.424598 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.21s
2026-04-09 00:47:07.424604 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.03s
2026-04-09 00:47:07.426144 | orchestrator | 2026-04-09 00:47:07 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED
2026-04-09 00:47:07.426180 | orchestrator | 2026-04-09 00:47:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:10.468249 | orchestrator | 2026-04-09 00:47:10 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:47:10.470365 |
orchestrator | 2026-04-09 00:47:10 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:10.472723 | orchestrator | 2026-04-09 00:47:10 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09 00:47:10.472769 | orchestrator | 2026-04-09 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:44.181070 | orchestrator | 2026-04-09 00:47:44 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:47:44.183384 | orchestrator | 2026-04-09 00:47:44 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:44.184290 | orchestrator | 2026-04-09 00:47:44 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state STARTED 2026-04-09
00:47:44.184324 | orchestrator | 2026-04-09 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:47.233685 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:47:47.235060 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:47:47.235971 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:47:47.237409 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:47.238260 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:47:47.239867 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:47:47.247028 | orchestrator | 2026-04-09 00:47:47 | INFO  | Task 1c3bbbdb-8bb9-46f6-8720-d13d00ef4cb1 is in state SUCCESS 2026-04-09 00:47:47.249250 | orchestrator | 2026-04-09 00:47:47.249296 | orchestrator | 2026-04-09 00:47:47.249304 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-09 00:47:47.249310 | orchestrator | 2026-04-09 00:47:47.249316 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 00:47:47.249339 | orchestrator | Thursday 09 April 2026 00:45:31 +0000 (0:00:00.292) 0:00:00.292 ******** 2026-04-09 00:47:47.249345 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:47:47.249351 | orchestrator | 2026-04-09 00:47:47.249360 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-09 00:47:47.249366 | orchestrator | Thursday 09 April 2026 
00:45:32 +0000 (0:00:01.240) 0:00:01.533 ******** 2026-04-09 00:47:47.249371 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249376 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249381 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249384 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249388 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249392 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249405 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249408 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249411 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249414 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249418 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:47:47.249421 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:47:47.249424 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249427 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249431 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:47:47.249434 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 
00:47:47.249437 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:47:47.249448 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:47:47.249451 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:47:47.249454 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:47:47.249457 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:47:47.249460 | orchestrator | 2026-04-09 00:47:47.249463 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 00:47:47.249466 | orchestrator | Thursday 09 April 2026 00:45:36 +0000 (0:00:04.668) 0:00:06.202 ******** 2026-04-09 00:47:47.249469 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:47:47.249473 | orchestrator | 2026-04-09 00:47:47.249480 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-09 00:47:47.249483 | orchestrator | Thursday 09 April 2026 00:45:38 +0000 (0:00:01.133) 0:00:07.335 ******** 2026-04-09 00:47:47.249511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 
00:47:47.249520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.249541 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.249552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.249557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.249563 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.249569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.249574 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249580 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249628 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.249671 | orchestrator | 2026-04-09 00:47:47.249676 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-09 00:47:47.249681 | orchestrator | Thursday 09 April 2026 00:45:42 +0000 (0:00:04.752) 0:00:12.087 ******** 2026-04-09 00:47:47.249686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249696 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249702 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:47:47.249708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249733 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:47:47.249742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 00:47:47.249751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249754 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:47:47.249757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249769 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:47:47.249775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249787 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 00:47:47.249790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249800 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:47:47.249806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249820 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:47:47.249823 | orchestrator | 2026-04-09 00:47:47.249826 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-09 00:47:47.249830 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:01.977) 0:00:14.065 ******** 2026-04-09 00:47:47.249833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249840 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 
00:47:47.249849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249875 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:47:47.249879 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:47:47.249882 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:47:47.249886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.249900 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:47:47.249904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.249910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:47.250163 | orchestrator | 
2026-04-09 00:47:47.250209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.250219 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:47:47.250226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.250233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.250238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.250252 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:47:47.250258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:47:47.250264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.250270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-09 00:47:47.250276 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:47:47.250281 | orchestrator | 2026-04-09 00:47:47.250287 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-09 00:47:47.250294 | orchestrator | Thursday 09 April 2026 00:45:47 +0000 (0:00:02.519) 0:00:16.585 ******** 2026-04-09 00:47:47.250299 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:47:47.250305 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:47:47.250310 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:47:47.250316 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:47:47.250322 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:47:47.250334 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:47:47.250340 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:47:47.250345 | orchestrator | 2026-04-09 00:47:47.250351 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-09 00:47:47.250357 | orchestrator | Thursday 09 April 2026 00:45:48 +0000 (0:00:01.023) 0:00:17.608 ******** 2026-04-09 00:47:47.250363 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:47:47.250368 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:47:47.250377 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:47:47.250383 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:47:47.250389 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:47:47.250394 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:47:47.250400 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:47:47.250405 | orchestrator | 2026-04-09 00:47:47.250411 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-09 00:47:47.250417 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:01.008) 0:00:18.617 ******** 2026-04-09 00:47:47.250423 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.250429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.250440 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.250446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.250452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-04-09 00:47:47.250483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.250584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.250596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250618 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.250640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:47:47.250646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:47:47.250652 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:47:47.250658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:47:47.250664 | orchestrator |
2026-04-09 00:47:47.250670 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-09 00:47:47.250676 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:09.061) 0:00:27.678 ********
2026-04-09 00:47:47.250682 | orchestrator | [WARNING]: Skipped
2026-04-09 00:47:47.250689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-09 00:47:47.250695 | orchestrator | to this access issue:
2026-04-09 00:47:47.250701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-09 00:47:47.250711 | orchestrator | directory
2026-04-09 00:47:47.250717 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:47:47.250723 | orchestrator |
2026-04-09 00:47:47.250729 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-09 00:47:47.250735 | orchestrator | Thursday 09 April 2026 00:45:59 +0000 (0:00:01.040) 0:00:28.719 ********
2026-04-09 00:47:47.250740 | orchestrator | [WARNING]: Skipped
2026-04-09 00:47:47.250745 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-09 00:47:47.250755 | orchestrator | to this access issue:
2026-04-09 00:47:47.250761 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-09 00:47:47.250767 | orchestrator | directory
2026-04-09 00:47:47.250772 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:47:47.250784 | orchestrator |
2026-04-09 00:47:47.250791 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-09 00:47:47.250798 | orchestrator | Thursday 09 April 2026 00:46:00 +0000 (0:00:01.338) 0:00:30.057 ********
2026-04-09 00:47:47.250806 | orchestrator | [WARNING]: Skipped
2026-04-09 00:47:47.250813 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-09 00:47:47.250818 | orchestrator | to this access issue:
2026-04-09 00:47:47.250824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-09 00:47:47.250830 | orchestrator | directory
2026-04-09 00:47:47.250835 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:47:47.250842 | orchestrator |
2026-04-09 00:47:47.250848 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-09 00:47:47.250853 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:00.839) 0:00:30.896 ********
2026-04-09 00:47:47.250859 | orchestrator | [WARNING]: Skipped
2026-04-09 00:47:47.250864 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-09 00:47:47.250870 | orchestrator | to this access issue:
2026-04-09 00:47:47.250875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-09 00:47:47.250881 | orchestrator | directory
2026-04-09 00:47:47.250887 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:47:47.250892 | orchestrator |
2026-04-09 00:47:47.250898 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-09 00:47:47.250904 | orchestrator | Thursday 09 April 2026 00:46:02 +0000 (0:00:00.859) 0:00:31.755 ********
2026-04-09 00:47:47.250909 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.250915 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.250920 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.250926 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.250931 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.250936 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.250942 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.250948 | orchestrator |
2026-04-09 00:47:47.250953 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-09 00:47:47.250959 | orchestrator | Thursday 09 April 2026 00:46:09 +0000 (0:00:07.213) 0:00:38.969 ********
2026-04-09 00:47:47.250965 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.250971 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.250977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.250982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.250988 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.250994 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.250999 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-09 00:47:47.251005 | orchestrator |
2026-04-09 00:47:47.251011 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-09 00:47:47.251016 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:03.706) 0:00:42.675 ********
2026-04-09 00:47:47.251021 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.251027 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.251033 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.251039 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.251044 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.251049 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.251055 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.251066 | orchestrator |
2026-04-09 00:47:47.251070 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-04-09 00:47:47.251074 | orchestrator | Thursday 09 April 2026 00:46:17 +0000 (0:00:03.687) 0:00:46.363 ********
2026-04-09 00:47:47.251078 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd',
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251087 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251102 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251105 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251117 | orchestrator | 
ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251120 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251133 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251137 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251145 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251149 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251156 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251166 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:47:47.251174 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251178 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251181 | orchestrator | 2026-04-09 00:47:47.251184 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-09 00:47:47.251187 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:03.199) 0:00:49.562 ******** 2026-04-09 00:47:47.251190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:47:47.251194 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-09 00:47:47.251197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-09 00:47:47.251203 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-09 00:47:47.251206 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-09 00:47:47.251209 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-09 00:47:47.251212 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-09 00:47:47.251216 | orchestrator |
2026-04-09 00:47:47.251219 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-09 00:47:47.251223 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:03.458) 0:00:53.021 ********
2026-04-09 00:47:47.251226 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251232 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251236 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251239 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251242 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251245 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-09 00:47:47.251248 | orchestrator |
2026-04-09 00:47:47.251251 | orchestrator | TASK [common :
Check common containers] **************************************** 2026-04-09 00:47:47.251255 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:02.674) 0:00:55.695 ******** 2026-04-09 00:47:47.251258 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:47:47.251287 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-09 00:47:47.251302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251311 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251333 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:47:47.251349 | orchestrator | 2026-04-09 00:47:47.251353 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-09 00:47:47.251356 | orchestrator | Thursday 09 April 2026 
00:46:29 +0000 (0:00:02.978) 0:00:58.674 ********
2026-04-09 00:47:47.251359 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.251362 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.251365 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.251368 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.251372 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.251375 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.251378 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.251381 | orchestrator |
2026-04-09 00:47:47.251384 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-09 00:47:47.251387 | orchestrator | Thursday 09 April 2026 00:46:30 +0000 (0:00:01.303) 0:00:59.978 ********
2026-04-09 00:47:47.251391 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.251394 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.251397 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.251400 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.251404 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.251407 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.251410 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.251413 | orchestrator |
2026-04-09 00:47:47.251417 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251420 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:00.063) 0:01:01.069 ********
2026-04-09 00:47:47.251423 | orchestrator |
2026-04-09 00:47:47.251426 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251429 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:00.060) 0:01:01.132 ********
2026-04-09 00:47:47.251432 | orchestrator |
2026-04-09 00:47:47.251436 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251439 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:00.059) 0:01:01.193 ********
2026-04-09 00:47:47.251442 | orchestrator |
2026-04-09 00:47:47.251445 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251449 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.059) 0:01:01.252 ********
2026-04-09 00:47:47.251452 | orchestrator |
2026-04-09 00:47:47.251455 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251458 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.061) 0:01:01.314 ********
2026-04-09 00:47:47.251461 | orchestrator |
2026-04-09 00:47:47.251465 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251468 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.060) 0:01:01.374 ********
2026-04-09 00:47:47.251471 | orchestrator |
2026-04-09 00:47:47.251474 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-09 00:47:47.251480 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.059) 0:01:01.434 ********
2026-04-09 00:47:47.251484 | orchestrator |
2026-04-09 00:47:47.251487 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-09 00:47:47.251504 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.083) 0:01:01.517 ********
2026-04-09 00:47:47.251510 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.251515 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.251520 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.251525 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.251530 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.251535 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.251539 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.251544 | orchestrator |
2026-04-09 00:47:47.251551 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-09 00:47:47.251557 | orchestrator | Thursday 09 April 2026 00:47:02 +0000 (0:00:30.264) 0:01:31.782 ********
2026-04-09 00:47:47.251562 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.251566 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.251569 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.251572 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.251576 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.251579 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.251582 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.251585 | orchestrator |
2026-04-09 00:47:47.251588 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-09 00:47:47.251592 | orchestrator | Thursday 09 April 2026 00:47:34 +0000 (0:00:31.647) 0:02:03.429 ********
2026-04-09 00:47:47.251595 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:47:47.251598 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:47:47.251601 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:47:47.251605 | orchestrator | ok: [testbed-manager]
2026-04-09 00:47:47.251608 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:47:47.251611 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:47:47.251614 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:47:47.251617 | orchestrator |
2026-04-09 00:47:47.251620 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-09 00:47:47.251623 | orchestrator | Thursday 09 April 2026 00:47:36 +0000 (0:00:02.496) 0:02:05.926 ********
2026-04-09 00:47:47.251626 | orchestrator | changed: [testbed-manager]
2026-04-09 00:47:47.251629 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:47:47.251632 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:47:47.251636 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:47:47.251639 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:47:47.251642 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:47:47.251645 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:47:47.251648 | orchestrator |
2026-04-09 00:47:47.251651 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:47:47.251655 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251659 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251662 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251665 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251669 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251672 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251678 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:47:47.251682 | orchestrator |
2026-04-09 00:47:47.251685 | orchestrator |
2026-04-09 00:47:47.251688 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:47:47.251691 | orchestrator | Thursday 09 April 2026 00:47:45 +0000 (0:00:08.970) 0:02:14.896 ********
2026-04-09 00:47:47.251695 | orchestrator | ===============================================================================
2026-04-09 00:47:47.251698 |
orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.65s 2026-04-09 00:47:47.251701 | orchestrator | common : Restart fluentd container ------------------------------------- 30.26s 2026-04-09 00:47:47.251704 | orchestrator | common : Copying over config.json files for services -------------------- 9.06s 2026-04-09 00:47:47.251707 | orchestrator | common : Restart cron container ----------------------------------------- 8.97s 2026-04-09 00:47:47.251711 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.21s 2026-04-09 00:47:47.251714 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.75s 2026-04-09 00:47:47.251717 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.67s 2026-04-09 00:47:47.251720 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.71s 2026-04-09 00:47:47.251723 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.69s 2026-04-09 00:47:47.251726 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.46s 2026-04-09 00:47:47.251729 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.20s 2026-04-09 00:47:47.251732 | orchestrator | common : Check common containers ---------------------------------------- 2.98s 2026-04-09 00:47:47.251735 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.67s 2026-04-09 00:47:47.251739 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.52s 2026-04-09 00:47:47.251745 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.50s 2026-04-09 00:47:47.251748 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.98s 2026-04-09 00:47:47.251751 | 
orchestrator | common : Find custom fluentd filter config files ------------------------ 1.34s 2026-04-09 00:47:47.251754 | orchestrator | common : Creating log volume -------------------------------------------- 1.30s 2026-04-09 00:47:47.251757 | orchestrator | common : include_tasks -------------------------------------------------- 1.24s 2026-04-09 00:47:47.251760 | orchestrator | common : include_tasks -------------------------------------------------- 1.13s 2026-04-09 00:47:50.288470 | orchestrator | 2026-04-09 00:47:50 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:47:50.289385 | orchestrator | 2026-04-09 00:47:50 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:47:50.293825 | orchestrator | 2026-04-09 00:47:50 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:47:50.293883 | orchestrator | 2026-04-09 00:47:50 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:50.293889 | orchestrator | 2026-04-09 00:47:50 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:47:50.293893 | orchestrator | 2026-04-09 00:47:50 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:47:50.293898 | orchestrator | 2026-04-09 00:47:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:53.314941 | orchestrator | 2026-04-09 00:47:53 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:47:53.315364 | orchestrator | 2026-04-09 00:47:53 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:47:53.316056 | orchestrator | 2026-04-09 00:47:53 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:47:53.317195 | orchestrator | 2026-04-09 00:47:53 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:53.317776 | orchestrator | 2026-04-09 
00:47:53 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:47:53.318376 | orchestrator | 2026-04-09 00:47:53 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:47:53.318404 | orchestrator | 2026-04-09 00:47:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:56.348443 | orchestrator | 2026-04-09 00:47:56 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:47:56.350683 | orchestrator | 2026-04-09 00:47:56 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:47:56.352625 | orchestrator | 2026-04-09 00:47:56 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:47:56.354808 | orchestrator | 2026-04-09 00:47:56 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:56.356993 | orchestrator | 2026-04-09 00:47:56 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:47:56.358606 | orchestrator | 2026-04-09 00:47:56 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:47:56.358901 | orchestrator | 2026-04-09 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:59.390748 | orchestrator | 2026-04-09 00:47:59 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:47:59.391428 | orchestrator | 2026-04-09 00:47:59 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:47:59.392763 | orchestrator | 2026-04-09 00:47:59 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:47:59.393384 | orchestrator | 2026-04-09 00:47:59 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:47:59.394314 | orchestrator | 2026-04-09 00:47:59 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:47:59.395384 | orchestrator | 2026-04-09 
00:47:59 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:47:59.395864 | orchestrator | 2026-04-09 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:02.441285 | orchestrator | 2026-04-09 00:48:02 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:02.442131 | orchestrator | 2026-04-09 00:48:02 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:02.443197 | orchestrator | 2026-04-09 00:48:02 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:02.445607 | orchestrator | 2026-04-09 00:48:02 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:02.446417 | orchestrator | 2026-04-09 00:48:02 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:02.447572 | orchestrator | 2026-04-09 00:48:02 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:48:02.447602 | orchestrator | 2026-04-09 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:05.562914 | orchestrator | 2026-04-09 00:48:05 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:05.562974 | orchestrator | 2026-04-09 00:48:05 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:05.563001 | orchestrator | 2026-04-09 00:48:05 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:05.563009 | orchestrator | 2026-04-09 00:48:05 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:05.563017 | orchestrator | 2026-04-09 00:48:05 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:05.563024 | orchestrator | 2026-04-09 00:48:05 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:48:05.563032 | orchestrator | 2026-04-09 
00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:08.545034 | orchestrator | 2026-04-09 00:48:08 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:08.546950 | orchestrator | 2026-04-09 00:48:08 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:08.547648 | orchestrator | 2026-04-09 00:48:08 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:08.548604 | orchestrator | 2026-04-09 00:48:08 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:08.549444 | orchestrator | 2026-04-09 00:48:08 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:08.550262 | orchestrator | 2026-04-09 00:48:08 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:48:08.550292 | orchestrator | 2026-04-09 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:11.586257 | orchestrator | 2026-04-09 00:48:11 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:11.587304 | orchestrator | 2026-04-09 00:48:11 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:11.588265 | orchestrator | 2026-04-09 00:48:11 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:11.589407 | orchestrator | 2026-04-09 00:48:11 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:11.591811 | orchestrator | 2026-04-09 00:48:11 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:11.593382 | orchestrator | 2026-04-09 00:48:11 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:48:11.593425 | orchestrator | 2026-04-09 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:14.627336 | orchestrator | 2026-04-09 00:48:14 | INFO  | Task 
bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:14.629235 | orchestrator | 2026-04-09 00:48:14 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:14.631327 | orchestrator | 2026-04-09 00:48:14 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:14.633417 | orchestrator | 2026-04-09 00:48:14 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:14.635717 | orchestrator | 2026-04-09 00:48:14 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:14.637126 | orchestrator | 2026-04-09 00:48:14 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:48:14.637185 | orchestrator | 2026-04-09 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:17.682810 | orchestrator | 2026-04-09 00:48:17 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:17.685159 | orchestrator | 2026-04-09 00:48:17 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:17.686903 | orchestrator | 2026-04-09 00:48:17 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:17.689128 | orchestrator | 2026-04-09 00:48:17 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:17.690627 | orchestrator | 2026-04-09 00:48:17 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:17.692151 | orchestrator | 2026-04-09 00:48:17 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state STARTED 2026-04-09 00:48:17.692184 | orchestrator | 2026-04-09 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:20.730141 | orchestrator | 2026-04-09 00:48:20 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:20.730824 | orchestrator | 2026-04-09 00:48:20 | INFO  | Task 
bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:20.731993 | orchestrator | 2026-04-09 00:48:20 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:20.733627 | orchestrator | 2026-04-09 00:48:20 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:20.734819 | orchestrator | 2026-04-09 00:48:20 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:20.735743 | orchestrator | 2026-04-09 00:48:20 | INFO  | Task 373518e3-2b11-44e7-977f-c5c0e30686ed is in state SUCCESS 2026-04-09 00:48:20.736017 | orchestrator | 2026-04-09 00:48:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:23.768026 | orchestrator | 2026-04-09 00:48:23 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:23.768659 | orchestrator | 2026-04-09 00:48:23 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state STARTED 2026-04-09 00:48:23.769655 | orchestrator | 2026-04-09 00:48:23 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:23.770273 | orchestrator | 2026-04-09 00:48:23 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:23.771065 | orchestrator | 2026-04-09 00:48:23 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED 2026-04-09 00:48:23.773424 | orchestrator | 2026-04-09 00:48:23 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED 2026-04-09 00:48:23.773462 | orchestrator | 2026-04-09 00:48:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:26.812455 | orchestrator | 2026-04-09 00:48:26 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:26.814697 | orchestrator | 2026-04-09 00:48:26.814745 | orchestrator | 2026-04-09 00:48:26.814750 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-04-09 00:48:26.814755 | orchestrator |
2026-04-09 00:48:26.814758 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:48:26.814762 | orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:00.631) 0:00:00.631 ********
2026-04-09 00:48:26.814765 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:26.814769 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:26.814773 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:26.814778 | orchestrator |
2026-04-09 00:48:26.814786 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:48:26.814793 | orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:00.395) 0:00:01.027 ********
2026-04-09 00:48:26.814798 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-09 00:48:26.814803 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-09 00:48:26.814808 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-09 00:48:26.814828 | orchestrator |
2026-04-09 00:48:26.814834 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-09 00:48:26.814839 | orchestrator |
2026-04-09 00:48:26.814844 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-09 00:48:26.814849 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.303) 0:00:01.331 ********
2026-04-09 00:48:26.814855 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:48:26.814859 | orchestrator |
2026-04-09 00:48:26.814862 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-09 00:48:26.814865 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.806) 0:00:02.137 ********
2026-04-09 00:48:26.814868 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-09 00:48:26.814871 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-09 00:48:26.814875 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-09 00:48:26.814878 | orchestrator |
2026-04-09 00:48:26.814881 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-09 00:48:26.814884 | orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:01.591) 0:00:03.728 ********
2026-04-09 00:48:26.814887 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-09 00:48:26.814890 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-09 00:48:26.814893 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-09 00:48:26.814896 | orchestrator |
2026-04-09 00:48:26.814899 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-04-09 00:48:26.814902 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:01.996) 0:00:05.724 ********
2026-04-09 00:48:26.814905 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:26.814908 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:26.814911 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:26.814914 | orchestrator |
2026-04-09 00:48:26.814917 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-09 00:48:26.814920 | orchestrator | Thursday 09 April 2026 00:47:59 +0000 (0:00:02.881) 0:00:08.606 ********
2026-04-09 00:48:26.814923 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:26.814932 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:26.814935 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:26.814938 | orchestrator |
2026-04-09 00:48:26.814941 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:48:26.814944 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:48:26.814949 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:48:26.814952 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:48:26.814955 | orchestrator |
2026-04-09 00:48:26.814958 | orchestrator |
2026-04-09 00:48:26.814961 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:48:26.814964 | orchestrator | Thursday 09 April 2026 00:48:19 +0000 (0:00:20.495) 0:00:29.102 ********
2026-04-09 00:48:26.814968 | orchestrator | ===============================================================================
2026-04-09 00:48:26.814973 | orchestrator | memcached : Restart memcached container -------------------------------- 20.50s
2026-04-09 00:48:26.814978 | orchestrator | memcached : Check memcached container ----------------------------------- 2.88s
2026-04-09 00:48:26.814982 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.00s
2026-04-09 00:48:26.814987 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.59s
2026-04-09 00:48:26.814992 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.81s
2026-04-09 00:48:26.815001 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s
2026-04-09 00:48:26.815006 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2026-04-09 00:48:26.815011 | orchestrator |
2026-04-09 00:48:26.815016 | orchestrator |
2026-04-09 00:48:26.815093 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:48:26.815099 | orchestrator |
2026-04-09 00:48:26.815104 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:48:26.815109 | orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:00.411) 0:00:00.411 ********
2026-04-09 00:48:26.815114 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:26.815119 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:26.815124 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:26.815129 | orchestrator |
2026-04-09 00:48:26.815134 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:48:26.815149 | orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:00.292) 0:00:00.703 ********
2026-04-09 00:48:26.815155 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-09 00:48:26.815159 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-09 00:48:26.815164 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-09 00:48:26.815169 | orchestrator |
2026-04-09 00:48:26.815174 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-09 00:48:26.815180 | orchestrator |
2026-04-09 00:48:26.815185 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-09 00:48:26.815190 | orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:00.253) 0:00:00.957 ********
2026-04-09 00:48:26.815195 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:48:26.815206 | orchestrator |
2026-04-09 00:48:26.815210 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-09 00:48:26.815215 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.657) 0:00:01.614 ********
2026-04-09 00:48:26.815222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815270 | orchestrator |
2026-04-09 00:48:26.815273 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-09 00:48:26.815276 | orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:01.567) 0:00:03.182 ********
2026-04-09 00:48:26.815279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815307 | orchestrator |
2026-04-09 00:48:26.815311 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-09 00:48:26.815314 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:02.919) 0:00:06.101 ********
2026-04-09 00:48:26.815317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-09 00:48:26.815323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen
redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815340 | orchestrator | 2026-04-09 00:48:26.815346 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-09 00:48:26.815351 | orchestrator | Thursday 09 April 2026 00:48:00 +0000 (0:00:03.558) 0:00:09.659 ******** 2026-04-09 00:48:26.815356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:48:26.815401 | orchestrator | 2026-04-09 00:48:26.815406 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 00:48:26.815411 | orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:01.882) 0:00:11.542 ******** 2026-04-09 00:48:26.815416 | orchestrator | 2026-04-09 00:48:26.815422 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 00:48:26.815428 | orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:00.313) 0:00:11.855 ******** 2026-04-09 00:48:26.815432 | orchestrator | 2026-04-09 00:48:26.815435 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 00:48:26.815438 | orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:00.140) 0:00:11.996 ******** 2026-04-09 00:48:26.815441 | orchestrator | 2026-04-09 00:48:26.815444 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-09 00:48:26.815447 | orchestrator | Thursday 09 April 2026 00:48:03 +0000 (0:00:00.209) 0:00:12.206 ******** 2026-04-09 00:48:26.815450 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:26.815453 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:26.815456 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:26.815459 | orchestrator | 2026-04-09 00:48:26.815462 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] 
*********************
2026-04-09 00:48:26.815465 | orchestrator | Thursday 09 April 2026 00:48:20 +0000 (0:00:17.059) 0:00:29.265 ********
2026-04-09 00:48:26.815468 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:26.815471 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:26.815475 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:26.815478 | orchestrator |
2026-04-09 00:48:26.815495 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:48:26.815498 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:48:26.815501 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:48:26.815507 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:48:26.815510 | orchestrator |
2026-04-09 00:48:26.815513 | orchestrator |
2026-04-09 00:48:26.815516 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:48:26.815520 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:03.973) 0:00:33.238 ********
2026-04-09 00:48:26.815523 | orchestrator | ===============================================================================
2026-04-09 00:48:26.815526 | orchestrator | redis : Restart redis container ---------------------------------------- 17.06s
2026-04-09 00:48:26.815529 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.97s
2026-04-09 00:48:26.815532 | orchestrator | redis : Copying over redis config files --------------------------------- 3.56s
2026-04-09 00:48:26.815535 | orchestrator | redis : Copying over default config.json files -------------------------- 2.92s
2026-04-09 00:48:26.815538 | orchestrator | redis : Check redis containers ------------------------------------------ 1.88s
2026-04-09 00:48:26.815541 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.57s
2026-04-09 00:48:26.815544 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.66s
2026-04-09 00:48:26.815547 | orchestrator | redis : include_tasks --------------------------------------------------- 0.66s
2026-04-09 00:48:26.815552 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-04-09 00:48:26.815555 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.25s
2026-04-09 00:48:26.815558 | orchestrator | 2026-04-09 00:48:26 | INFO  | Task bb04dc12-5527-45cf-86fa-29e3aa96d14b is in state SUCCESS
2026-04-09 00:48:26.815561 | orchestrator | 2026-04-09 00:48:26 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:26.815565 | orchestrator | 2026-04-09 00:48:26 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:26.817999 | orchestrator | 2026-04-09 00:48:26 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:26.818119 | orchestrator | 2026-04-09 00:48:26 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:26.818127 | orchestrator | 2026-04-09 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:29.846208 | orchestrator | 2026-04-09 00:48:29 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:29.846608 | orchestrator | 2026-04-09 00:48:29 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:29.848068 | orchestrator | 2026-04-09 00:48:29 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:29.849057 | orchestrator | 2026-04-09 00:48:29 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:29.849894 | orchestrator | 2026-04-09
00:48:29 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:29.850084 | orchestrator | 2026-04-09 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:32.884552 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:32.885730 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:32.885801 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:32.886357 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:32.889594 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:32.889628 | orchestrator | 2026-04-09 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:35.948232 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:35.954686 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:35.955339 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:35.956838 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:35.957695 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:35.958137 | orchestrator | 2026-04-09 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:39.079657 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:39.079827 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:39.080392 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:39.081537 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:39.081676 | orchestrator | 2026-04-09 00:48:39 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:39.081688 | orchestrator | 2026-04-09 00:48:39 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:42.137301 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:42.139878 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:42.143298 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:42.147227 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:42.149451 | orchestrator | 2026-04-09 00:48:42 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:42.149576 | orchestrator | 2026-04-09 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:45.186974 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:45.187414 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:45.188715 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:45.189172 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:45.193496 | orchestrator | 2026-04-09 00:48:45 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:45.193550 | orchestrator | 2026-04-09 00:48:45 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:48.309685 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:48.309792 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:48.310240 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:48.311128 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state STARTED
2026-04-09 00:48:48.311748 | orchestrator | 2026-04-09 00:48:48 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:48:48.311786 | orchestrator | 2026-04-09 00:48:48 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:51.336327 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:48:51.336939 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:48:51.337892 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:48:51.339276 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task 5ed6ce5d-0940-47e1-b3f9-655e8e1e6d46 is in state SUCCESS
2026-04-09 00:48:51.340549 | orchestrator |
2026-04-09 00:48:51.340596 | orchestrator |
2026-04-09 00:48:51.340605 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:48:51.340613 | orchestrator |
2026-04-09 00:48:51.340620 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:48:51.340628 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.535) 0:00:00.535
********
2026-04-09 00:48:51.340634 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:51.340642 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:51.340649 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:51.340655 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:48:51.340660 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:48:51.340666 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:48:51.340672 | orchestrator |
2026-04-09 00:48:51.340678 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:48:51.340684 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.863) 0:00:01.399 ********
2026-04-09 00:48:51.340691 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 00:48:51.340698 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 00:48:51.340704 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 00:48:51.340710 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 00:48:51.340717 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 00:48:51.340723 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-09 00:48:51.340729 | orchestrator |
2026-04-09 00:48:51.340736 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-09 00:48:51.340752 | orchestrator |
2026-04-09 00:48:51.340759 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-09 00:48:51.340773 | orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:00.868) 0:00:02.268 ********
2026-04-09 00:48:51.340780 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:48:51.340788 | orchestrator |
2026-04-09 00:48:51.340794 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 00:48:51.340801 | orchestrator | Thursday 09 April 2026 00:47:55 +0000 (0:00:01.716) 0:00:03.984 ********
2026-04-09 00:48:51.340807 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-09 00:48:51.340813 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-09 00:48:51.340820 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-09 00:48:51.340826 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-09 00:48:51.340832 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-09 00:48:51.340862 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-09 00:48:51.340868 | orchestrator |
2026-04-09 00:48:51.340875 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 00:48:51.340896 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:01.329) 0:00:05.314 ********
2026-04-09 00:48:51.340903 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-09 00:48:51.340909 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-09 00:48:51.340915 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-09 00:48:51.340922 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-09 00:48:51.340928 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-09 00:48:51.340934 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-09 00:48:51.340941 | orchestrator |
2026-04-09 00:48:51.340947 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 00:48:51.340953 | orchestrator | Thursday 09 April 2026 00:47:59 +0000 (0:00:02.449) 0:00:07.763 ********
2026-04-09 00:48:51.340960 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-09 00:48:51.340966 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:51.340973 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-09 00:48:51.340980 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:51.340986 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-09 00:48:51.340993 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:51.340999 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-09 00:48:51.341005 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:51.341012 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-09 00:48:51.341018 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:51.341025 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-09 00:48:51.341031 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:51.341037 | orchestrator |
2026-04-09 00:48:51.341044 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-09 00:48:51.341050 | orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:01.162) 0:00:09.921 ********
2026-04-09 00:48:51.341056 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:51.341062 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:51.341069 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:51.341074 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:51.341080 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:51.341086 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:51.341093 | orchestrator |
2026-04-09 00:48:51.341099 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-09 00:48:51.341105 | orchestrator | Thursday 09 April 2026 00:48:02 +0000
(0:00:01.162) 0:00:11.083 ******** 2026-04-09 00:48:51.341128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341306 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341345 | orchestrator | 2026-04-09 00:48:51.341352 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-09 00:48:51.341359 | orchestrator | Thursday 09 April 2026 00:48:04 +0000 (0:00:02.397) 0:00:13.480 ******** 2026-04-09 00:48:51.341365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341379 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-04-09 00:48:51.341515 | orchestrator | 2026-04-09 00:48:51.341529 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-09 00:48:51.341535 | orchestrator | Thursday 09 April 2026 00:48:07 +0000 (0:00:02.933) 0:00:16.414 ******** 2026-04-09 00:48:51.341549 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:51.341555 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:51.341561 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:51.341567 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:51.341581 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:51.341587 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:51.341591 | orchestrator | 2026-04-09 00:48:51.341595 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-09 00:48:51.341599 | orchestrator | Thursday 09 April 2026 00:48:08 +0000 (0:00:00.927) 0:00:17.341 ******** 2026-04-09 00:48:51.341604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
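The healthcheck dict logged for these containers (interval, retries, start_period, test, timeout) maps directly onto Docker's native healthcheck options. A minimal sketch of that translation, for reading the log values; Kolla applies them through its own Ansible container module, not via the CLI shown here:

```shell
#!/bin/sh
# Sketch only: the openvswitch_db healthcheck values from the log above,
# expressed as equivalent `docker run` flags (illustrative, not what the
# playbook actually executes).
HC_TEST='ovsdb-client list-dbs'   # 'test' from the logged healthcheck
HC_INTERVAL=30
HC_RETRIES=3
HC_START_PERIOD=5
HC_TIMEOUT=30
HC_FLAGS="--health-cmd='$HC_TEST' --health-interval=${HC_INTERVAL}s \
--health-retries=$HC_RETRIES --health-start-period=${HC_START_PERIOD}s \
--health-timeout=${HC_TIMEOUT}s"
echo "$HC_FLAGS"
```

The bare integers in the log are seconds; Docker's flags want explicit duration suffixes, hence the appended `s`.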
2026-04-09 00:48:51.341653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:48:51.341741 | orchestrator | 2026-04-09 00:48:51.341748 | orchestrator | TASK [openvswitch : Flush Handlers] 
********************************************
2026-04-09 00:48:51.341762 | orchestrator | Thursday 09 April 2026 00:48:10 +0000 (0:00:02.149) 0:00:19.491 ********
2026-04-09 00:48:51.341769 | orchestrator |
2026-04-09 00:48:51.341775 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:48:51.341782 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.130) 0:00:19.621 ********
2026-04-09 00:48:51.341796 | orchestrator |
2026-04-09 00:48:51.341803 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:48:51.341809 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.157) 0:00:19.779 ********
2026-04-09 00:48:51.341815 | orchestrator |
2026-04-09 00:48:51.341822 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:48:51.341828 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.190) 0:00:19.969 ********
2026-04-09 00:48:51.341834 | orchestrator |
2026-04-09 00:48:51.341841 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:48:51.341847 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.273) 0:00:20.242 ********
2026-04-09 00:48:51.341853 | orchestrator |
2026-04-09 00:48:51.341864 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-09 00:48:51.341871 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.135) 0:00:20.378 ********
2026-04-09 00:48:51.341877 | orchestrator |
2026-04-09 00:48:51.341884 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-09 00:48:51.341890 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.131) 0:00:20.509 ********
2026-04-09 00:48:51.341896 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:51.341903 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:51.341909 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:51.341916 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:51.341922 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:51.341928 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:51.341935 | orchestrator |
2026-04-09 00:48:51.341941 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-09 00:48:51.341953 | orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:11.214) 0:00:31.724 ********
2026-04-09 00:48:51.341959 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:51.341966 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:51.341972 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:51.341979 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:48:51.341985 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:48:51.341992 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:48:51.341998 | orchestrator |
2026-04-09 00:48:51.342004 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-09 00:48:51.342011 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:01.226) 0:00:32.950 ********
2026-04-09 00:48:51.342069 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:51.342077 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:51.342084 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:51.342091 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:51.342098 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:51.342105 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:51.342112 | orchestrator |
2026-04-09 00:48:51.342119 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-09 00:48:51.342126 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:03.881) 0:00:36.831 ********
2026-04-09 00:48:51.342133 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-09 00:48:51.342141 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-09 00:48:51.342148 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-09 00:48:51.342154 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-09 00:48:51.342161 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-09 00:48:51.342173 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-09 00:48:51.342180 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-09 00:48:51.342187 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-09 00:48:51.342194 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-09 00:48:51.342200 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-09 00:48:51.342206 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-09 00:48:51.342212 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-09 00:48:51.342219 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:48:51.342225 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:48:51.342231 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:48:51.342238 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:48:51.342244 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:48:51.342250 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-09 00:48:51.342257 | orchestrator |
2026-04-09 00:48:51.342268 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-09 00:48:51.342276 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:06.154) 0:00:42.986 ********
2026-04-09 00:48:51.342283 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-09 00:48:51.342290 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:51.342296 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-09 00:48:51.342303 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:51.342310 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-09 00:48:51.342317 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:51.342324 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-09 00:48:51.342334 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-09 00:48:51.342341 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-09 00:48:51.342348 | orchestrator |
2026-04-09 00:48:51.342355 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-09 00:48:51.342362 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:02.047) 0:00:45.034 ********
2026-04-09 00:48:51.342369 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:48:51.342376 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:51.342382 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:48:51.342389 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:51.342396 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:48:51.342402 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:51.342409 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:48:51.342416 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:48:51.342423 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-09 00:48:51.342431 | orchestrator |
2026-04-09 00:48:51.342438 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-09 00:48:51.342446 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:04.563) 0:00:49.597 ********
2026-04-09 00:48:51.342453 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:51.342460 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:51.342511 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:51.342517 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:51.342524 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:51.342530 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:51.342536 | orchestrator |
2026-04-09 00:48:51.342542 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:48:51.342549 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 00:48:51.342556 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 00:48:51.342563 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 00:48:51.342569 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 00:48:51.342575 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 00:48:51.342585 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 00:48:51.342591 | orchestrator |
2026-04-09 00:48:51.342597 | orchestrator |
2026-04-09 00:48:51.342603 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:48:51.342609 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:09.027) 0:00:58.625 ********
2026-04-09 00:48:51.342620 | orchestrator | ===============================================================================
2026-04-09 00:48:51.342625 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 12.91s
2026-04-09 00:48:51.342628 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.21s
2026-04-09 00:48:51.342632 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.15s
2026-04-09 00:48:51.342636 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.56s
2026-04-09 00:48:51.342640 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.93s
2026-04-09 00:48:51.342644 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.45s
2026-04-09 00:48:51.342647 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.40s
2026-04-09 00:48:51.342651 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.16s
2026-04-09 00:48:51.342655 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.15s 2026-04-09 00:48:51.342659 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.05s 2026-04-09 00:48:51.342663 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.72s 2026-04-09 00:48:51.342667 | orchestrator | module-load : Load modules ---------------------------------------------- 1.33s 2026-04-09 00:48:51.342671 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.23s 2026-04-09 00:48:51.342676 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.16s 2026-04-09 00:48:51.342682 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.02s 2026-04-09 00:48:51.342688 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.93s 2026-04-09 00:48:51.342694 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-04-09 00:48:51.342700 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2026-04-09 00:48:51.342706 | orchestrator | 2026-04-09 00:48:51 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED 2026-04-09 00:48:51.342717 | orchestrator | 2026-04-09 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:54.367004 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:48:54.367073 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:48:54.367868 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED 2026-04-09 00:48:54.371196 | orchestrator | 2026-04-09 00:48:54 | INFO  | Task 
bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:05.157564 | orchestrator | 2026-04-09 00:50:05 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:05.157934 | orchestrator | 2026-04-09 00:50:05 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state STARTED
2026-04-09 00:50:05.158673 | orchestrator | 2026-04-09 00:50:05 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:05.160015 | orchestrator | 2026-04-09 00:50:05 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:05.160049 | orchestrator | 2026-04-09 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:08.189728 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:08.190156 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:08.191798 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task 8fd2b506-c61a-4aff-855a-9b4d9b8a320d is in state SUCCESS
2026-04-09 00:50:08.193251 | orchestrator |
2026-04-09 00:50:08.193306 | orchestrator |
2026-04-09 00:50:08.193316 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-09 00:50:08.193324 | orchestrator |
2026-04-09 00:50:08.193333 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-09 00:50:08.193341 | orchestrator | Thursday 09 April 2026 00:45:31 +0000 (0:00:00.258) 0:00:00.258 ********
2026-04-09 00:50:08.193349 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.193357 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.193365 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.193372 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.193379 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.193386 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.193393 | orchestrator |
2026-04-09 00:50:08.193401 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-09 00:50:08.193496 | orchestrator | Thursday 09 April 2026 00:45:32 +0000 (0:00:00.658) 0:00:00.917 ********
2026-04-09 00:50:08.193506 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.193514 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.193521 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.193529 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.193536 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.193543 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.193551 | orchestrator |
2026-04-09 00:50:08.193559 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-09 00:50:08.193567 | orchestrator | Thursday 09 April 2026 00:45:33 +0000 (0:00:00.751) 0:00:01.669 ********
2026-04-09 00:50:08.193573 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.193581 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.193588 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.193596 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.193603 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.193610 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.193618 | orchestrator |
2026-04-09 00:50:08.193625 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-09 00:50:08.193632 | orchestrator | Thursday 09 April 2026 00:45:33 +0000 (0:00:00.590) 0:00:02.259 ********
2026-04-09 00:50:08.193640 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.193648 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.193655 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.193663 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.193670 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.193678 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.193685 | orchestrator |
2026-04-09 00:50:08.193692 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-09 00:50:08.193700 | orchestrator | Thursday 09 April 2026 00:45:35 +0000 (0:00:02.066) 0:00:04.326 ********
2026-04-09 00:50:08.193707 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.193714 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.193721 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.193728 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.193735 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.193742 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.193749 | orchestrator |
2026-04-09 00:50:08.193757 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-09 00:50:08.193764 | orchestrator | Thursday 09 April 2026 00:45:38 +0000 (0:00:02.093) 0:00:06.420 ********
2026-04-09 00:50:08.193772 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.193779 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.193786 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.193794 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.193801 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.193808 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.193816 | orchestrator |
2026-04-09 00:50:08.193823 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-09 00:50:08.193831 | orchestrator | Thursday 09 April 2026 00:45:39 +0000 (0:00:01.499) 0:00:07.919 ********
2026-04-09 00:50:08.193838 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.193845 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.193853 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.193861 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.193870 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.193879 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.193887 | orchestrator |
2026-04-09 00:50:08.193895 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-09 00:50:08.193904 | orchestrator | Thursday 09 April 2026 00:45:40 +0000 (0:00:01.018) 0:00:08.938 ********
2026-04-09 00:50:08.193919 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.193927 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.193936 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.193944 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.193952 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.193961 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.193969 | orchestrator |
2026-04-09 00:50:08.193976 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-09 00:50:08.193982 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:00.579) 0:00:09.517 ********
2026-04-09 00:50:08.193996 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 00:50:08.194003 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 00:50:08.194073 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 00:50:08.194084 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 00:50:08.194092 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194100 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 00:50:08.194109 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 00:50:08.194117 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194124 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 00:50:08.194132 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 00:50:08.194155 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194163 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 00:50:08.194171 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 00:50:08.194178 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194185 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194192 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-09 00:50:08.194199 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-09 00:50:08.194206 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194214 | orchestrator |
2026-04-09 00:50:08.194221 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-09 00:50:08.194228 | orchestrator | Thursday 09 April 2026 00:45:42 +0000 (0:00:01.287) 0:00:10.804 ********
2026-04-09 00:50:08.194234 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194241 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194248 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194255 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194262 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194269 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194276 | orchestrator |
2026-04-09 00:50:08.194283 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-09 00:50:08.194292 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:01.502) 0:00:12.307 ********
2026-04-09 00:50:08.194299 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.194306 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.194313 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.194320 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.194327 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.194334 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.194341 | orchestrator |
2026-04-09 00:50:08.194349 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-09 00:50:08.194356 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:00.891) 0:00:13.199 ********
2026-04-09 00:50:08.194363 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:50:08.194370 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:50:08.194384 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:50:08.194391 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:08.194398 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:08.194405 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:08.194431 | orchestrator |
2026-04-09 00:50:08.194438 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-09 00:50:08.194446 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:05.892) 0:00:19.092 ********
2026-04-09 00:50:08.194452 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194459 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194466 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194474 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194481 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194488 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194495 | orchestrator |
2026-04-09 00:50:08.194502 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-09 00:50:08.194509 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:02.272) 0:00:21.364 ********
2026-04-09 00:50:08.194516 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194524 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194531 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194538 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194544 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194551 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194558 | orchestrator |
2026-04-09 00:50:08.194565 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-09 00:50:08.194574 | orchestrator | Thursday 09 April 2026 00:45:55 +0000 (0:00:02.138) 0:00:23.503 ********
2026-04-09 00:50:08.194580 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194587 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194595 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194602 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194609 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194616 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194623 | orchestrator |
2026-04-09 00:50:08.194629 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-09 00:50:08.194635 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:00.947) 0:00:24.451 ********
2026-04-09 00:50:08.194642 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-09 00:50:08.194649 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-09 00:50:08.194656 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194664 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-09 00:50:08.194671 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-09 00:50:08.194678 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194686 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-09 00:50:08.194697 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-09 00:50:08.194705 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194712 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-09 00:50:08.194719 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-09 00:50:08.194726 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194734 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-09 00:50:08.194741 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-09 00:50:08.194749 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194756 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-09 00:50:08.194763 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-09 00:50:08.194771 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194778 | orchestrator |
2026-04-09 00:50:08.194785 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-09 00:50:08.194805 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:01.370) 0:00:25.822 ********
2026-04-09 00:50:08.194812 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194820 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194827 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194834 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194841 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194848 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194855 | orchestrator |
2026-04-09 00:50:08.194862 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-09 00:50:08.194870 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:01.114) 0:00:26.937 ********
2026-04-09 00:50:08.194877 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.194885 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.194892 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.194899 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.194906 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.194913 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.194920 | orchestrator |
2026-04-09 00:50:08.194927 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-09 00:50:08.194934 | orchestrator |
2026-04-09 00:50:08.194941 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-09 00:50:08.194949 | orchestrator | Thursday 09 April 2026 00:46:00 +0000 (0:00:01.474) 0:00:28.411 ********
2026-04-09 00:50:08.194955 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.194962 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.194969 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.194976 | orchestrator |
2026-04-09 00:50:08.194982 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-09 00:50:08.194988 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:01.578) 0:00:29.989 ********
2026-04-09 00:50:08.194993 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.194999 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.195006 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.195013 | orchestrator |
2026-04-09 00:50:08.195020 | orchestrator | TASK [k3s_server : Stop k3s]
*************************************************** 2026-04-09 00:50:08.195028 | orchestrator | Thursday 09 April 2026 00:46:02 +0000 (0:00:01.351) 0:00:31.340 ******** 2026-04-09 00:50:08.195035 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.195042 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.195049 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.195056 | orchestrator | 2026-04-09 00:50:08.195063 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-09 00:50:08.195070 | orchestrator | Thursday 09 April 2026 00:46:04 +0000 (0:00:01.214) 0:00:32.555 ******** 2026-04-09 00:50:08.195077 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.195084 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.195091 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.195098 | orchestrator | 2026-04-09 00:50:08.195105 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-09 00:50:08.195113 | orchestrator | Thursday 09 April 2026 00:46:06 +0000 (0:00:02.169) 0:00:34.726 ******** 2026-04-09 00:50:08.195119 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.195125 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.195132 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.195140 | orchestrator | 2026-04-09 00:50:08.195146 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-09 00:50:08.195154 | orchestrator | Thursday 09 April 2026 00:46:07 +0000 (0:00:00.909) 0:00:35.636 ******** 2026-04-09 00:50:08.195162 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.195169 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.195176 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.195183 | orchestrator | 2026-04-09 00:50:08.195190 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] 
************************** 2026-04-09 00:50:08.195205 | orchestrator | Thursday 09 April 2026 00:46:08 +0000 (0:00:01.121) 0:00:36.757 ******** 2026-04-09 00:50:08.195212 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.195219 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.195226 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.195233 | orchestrator | 2026-04-09 00:50:08.195241 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-09 00:50:08.195248 | orchestrator | Thursday 09 April 2026 00:46:10 +0000 (0:00:02.083) 0:00:38.841 ******** 2026-04-09 00:50:08.195255 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:08.195262 | orchestrator | 2026-04-09 00:50:08.195270 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-09 00:50:08.195277 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:01.064) 0:00:39.905 ******** 2026-04-09 00:50:08.195285 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.195292 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.195300 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.195307 | orchestrator | 2026-04-09 00:50:08.195314 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-09 00:50:08.195321 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:02.618) 0:00:42.523 ******** 2026-04-09 00:50:08.195329 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.195336 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.195355 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.195363 | orchestrator | 2026-04-09 00:50:08.195370 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-09 00:50:08.195377 | orchestrator | Thursday 09 April 2026 
00:46:14 +0000 (0:00:00.854) 0:00:43.378 ******** 2026-04-09 00:50:08.195384 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.195391 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.195398 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.195406 | orchestrator | 2026-04-09 00:50:08.195436 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-09 00:50:08.195444 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:01.313) 0:00:44.691 ******** 2026-04-09 00:50:08.195450 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.195458 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.195465 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.195472 | orchestrator | 2026-04-09 00:50:08.195479 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-09 00:50:08.195493 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:01.934) 0:00:46.625 ******** 2026-04-09 00:50:08.195500 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.195507 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.195514 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.195521 | orchestrator | 2026-04-09 00:50:08.195529 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-09 00:50:08.195536 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:00.593) 0:00:47.219 ******** 2026-04-09 00:50:08.195544 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:08.195551 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:08.195558 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:08.195565 | orchestrator | 2026-04-09 00:50:08.195573 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-09 00:50:08.195580 | orchestrator | Thursday 09 April 2026 
00:46:19 +0000 (0:00:00.556) 0:00:47.776 ******** 2026-04-09 00:50:08.195587 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:08.195595 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:08.195602 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:08.195609 | orchestrator | 2026-04-09 00:50:08.195616 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-09 00:50:08.195624 | orchestrator | Thursday 09 April 2026 00:46:21 +0000 (0:00:02.451) 0:00:50.228 ******** 2026-04-09 00:50:08.195637 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.195645 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.195652 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.195659 | orchestrator | 2026-04-09 00:50:08.195667 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-09 00:50:08.195674 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:02.763) 0:00:52.991 ******** 2026-04-09 00:50:08.195682 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:08.195689 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:08.195696 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:08.195703 | orchestrator | 2026-04-09 00:50:08.195710 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-09 00:50:08.195718 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:00.647) 0:00:53.639 ******** 2026-04-09 00:50:08.195725 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-09 00:50:08.195733 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
orchestrator | Thursday 09 April 2026 00:47:19 +0000 (0:00:54.202) 0:01:47.842 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
orchestrator | Thursday 09 April 2026 00:47:19 +0000 (0:00:00.538) 0:01:48.380 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
orchestrator | Thursday 09 April 2026 00:47:20 +0000 (0:00:00.906) 0:01:49.287 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
orchestrator | Thursday 09 April 2026 00:47:22 +0000 (0:00:01.286) 0:01:50.574 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
orchestrator | Thursday 09 April 2026 00:47:47 +0000 (0:00:25.283) 0:02:15.857 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
orchestrator | Thursday 09 April 2026 00:47:48 +0000 (0:00:00.831) 0:02:16.689 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Change file access node-token] ******************************
orchestrator | Thursday 09 April 2026 00:47:49 +0000 (0:00:00.967) 0:02:17.657 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Read node-token from master] ********************************
orchestrator | Thursday 09 April 2026 00:47:49 +0000 (0:00:00.629) 0:02:18.286 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Store Master node-token] ************************************
orchestrator | Thursday 09 April 2026 00:47:50 +0000 (0:00:00.760) 0:02:19.047 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
orchestrator | Thursday 09 April 2026 00:47:50 +0000 (0:00:00.345) 0:02:19.392 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create directory .kube] *************************************
orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:00.734) 0:02:20.126 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.626) 0:02:20.753 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:00.882) 0:02:21.636 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:00.819) 0:02:22.455 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:00.404) 0:02:22.859 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:00.243) 0:02:23.103 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
orchestrator | Thursday 09 April 2026 00:47:55 +0000 (0:00:00.744) 0:02:23.847 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:00.626) 0:02:24.474 ********
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
orchestrator |
orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
orchestrator |
orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
orchestrator | Thursday 09 April 2026 00:47:59 +0000 (0:00:03.116) 0:02:27.591 ********
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
orchestrator | Thursday 09 April 2026 00:47:59 +0000 (0:00:00.361) 0:02:27.952 ********
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
orchestrator | Thursday 09 April 2026 00:48:00 +0000 (0:00:00.774) 0:02:28.727 ********
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:00.768) 0:02:29.496 ********
orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:00.606) 0:02:30.102 ********
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:00.366) 0:02:30.469 ********
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
orchestrator | Thursday 09 April 2026 00:48:02 +0000 (0:00:00.607) 0:02:31.077 ********
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
orchestrator | Thursday 09 April 2026 00:48:03 +0000 (0:00:00.443) 0:02:31.520 ********
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
orchestrator | Thursday 09 April 2026 00:48:03 +0000 (0:00:00.674) 0:02:32.195 ********
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
orchestrator | Thursday 09 April 2026 00:48:05 +0000 (0:00:01.403) 0:02:33.599 ********
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
orchestrator | Thursday 09 April 2026 00:48:06 +0000 (0:00:01.664) 0:02:35.263 ********
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | PLAY [Prepare kubeconfig file] *************************************************
orchestrator |
orchestrator | TASK [Get home directory of operator user] *************************************
orchestrator | Thursday 09 April 2026 00:48:17 +0000 (0:00:10.149) 0:02:45.413 ********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [Create .kube directory] **************************************************
orchestrator | Thursday 09 April 2026 00:48:17 +0000 (0:00:00.820) 0:02:46.233 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Get kubeconfig file] *****************************************************
orchestrator | Thursday 09 April 2026 00:48:18 +0000 (0:00:00.472) 0:02:46.706 ********
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
orchestrator |
orchestrator | TASK [Write kubeconfig file] ***************************************************
orchestrator | Thursday 09 April 2026 00:48:18 +0000 (0:00:00.545) 0:02:47.252 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Change server address in the kubeconfig] *********************************
orchestrator | Thursday 09 April 2026 00:48:19 +0000 (0:00:01.026) 0:02:48.279 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
orchestrator | Thursday 09 April 2026 00:48:20 +0000 (0:00:00.669) 0:02:48.948 ********
orchestrator | changed: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
orchestrator | Thursday 09 April 2026 00:48:22 +0000 (0:00:01.730) 0:02:50.679 ********
orchestrator | changed: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:00.784) 0:02:51.464 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Enable kubectl command line completion] **********************************
orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:00.299) 0:02:51.763 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | PLAY [Apply role kubectl] ******************************************************
orchestrator |
orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:00.356) 0:02:52.120 ********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:00.098) 0:02:52.219 ********
orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
orchestrator |
orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:00.172) 0:02:52.392 ********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
orchestrator | Thursday 09 April 2026 00:48:25 +0000 (0:00:01.102) 0:02:53.494 ********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:01.416) 0:02:54.910 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.681) 0:02:55.592 ********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Add repository Debian] *****************************************
orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:00.392) 0:02:55.984 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Install required packages] *************************************
orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:06.737) 0:03:02.721 ********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:11.620) 0:03:14.341 ********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | PLAY [Run post actions on master nodes] ****************************************
orchestrator |
orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:00.435) 0:03:14.776 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:00.376) 0:03:15.153 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.263) 0:03:15.416 ********
orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-2, testbed-node-1
orchestrator |
2026-04-09 00:50:08.198135 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-09 00:50:08.198143 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.546) 0:03:15.963 ********
2026-04-09 00:50:08.198150 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198158 | orchestrator |
2026-04-09 00:50:08.198165 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-09 00:50:08.198173 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:00.682) 0:03:16.645 ********
2026-04-09 00:50:08.198181 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198188 | orchestrator |
2026-04-09 00:50:08.198196 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-09 00:50:08.198203 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:00.874) 0:03:17.520 ********
2026-04-09 00:50:08.198211 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198218 | orchestrator |
2026-04-09 00:50:08.198225 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-09 00:50:08.198234 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:00.207) 0:03:17.727 ********
2026-04-09 00:50:08.198242 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198249 | orchestrator |
2026-04-09 00:50:08.198257 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-09 00:50:08.198265 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.769) 0:03:18.496 ********
2026-04-09 00:50:08.198272 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198280 | orchestrator |
2026-04-09 00:50:08.198287 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-09 00:50:08.198295 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.138) 0:03:18.635 ********
2026-04-09 00:50:08.198303 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198310 | orchestrator |
2026-04-09 00:50:08.198318 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-09 00:50:08.198332 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.133) 0:03:18.768 ********
2026-04-09 00:50:08.198341 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198348 | orchestrator |
2026-04-09 00:50:08.198361 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-09 00:50:08.198368 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.138) 0:03:18.907 ********
2026-04-09 00:50:08.198377 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198384 | orchestrator |
2026-04-09 00:50:08.198392 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-09 00:50:08.198399 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.120) 0:03:19.028 ********
2026-04-09 00:50:08.198407 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198433 | orchestrator |
2026-04-09 00:50:08.198440 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-09 00:50:08.198448 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:04.303) 0:03:23.331 ********
2026-04-09 00:50:08.198455 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-09 00:50:08.198473 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-09 00:50:08.198482 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-09 00:50:08.198489 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-09 00:50:08.198496 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-09 00:50:08.198503 | orchestrator |
2026-04-09 00:50:08.198510 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-09 00:50:08.198518 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:44.294) 0:04:07.625 ********
2026-04-09 00:50:08.198525 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198532 | orchestrator |
2026-04-09 00:50:08.198540 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-09 00:50:08.198547 | orchestrator | Thursday 09 April 2026 00:49:40 +0000 (0:00:01.051) 0:04:08.676 ********
2026-04-09 00:50:08.198554 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198561 | orchestrator |
2026-04-09 00:50:08.198568 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-09 00:50:08.198575 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:01.325) 0:04:10.002 ********
2026-04-09 00:50:08.198582 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:50:08.198589 | orchestrator |
2026-04-09 00:50:08.198597 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-09 00:50:08.198604 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:00.996) 0:04:10.998 ********
2026-04-09 00:50:08.198611 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198618 | orchestrator |
2026-04-09 00:50:08.198625 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-09 00:50:08.198632 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:00.118) 0:04:11.117 ********
2026-04-09 00:50:08.198639 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-09 00:50:08.198647 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-09 00:50:08.198654 | orchestrator |
2026-04-09 00:50:08.198661 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-09 00:50:08.198668 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:01.770) 0:04:12.888 ********
2026-04-09 00:50:08.198675 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.198683 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.198690 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.198697 | orchestrator |
2026-04-09 00:50:08.198704 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-09 00:50:08.198711 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.287) 0:04:13.175 ********
2026-04-09 00:50:08.198724 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.198730 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.198738 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.198745 | orchestrator |
2026-04-09 00:50:08.198752 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-09 00:50:08.198759 | orchestrator |
2026-04-09 00:50:08.198767 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-09 00:50:08.198774 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.900) 0:04:14.075 ********
2026-04-09 00:50:08.198781 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:08.198788 | orchestrator |
2026-04-09 00:50:08.198796 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-09 00:50:08.198803 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.258) 0:04:14.334 ********
2026-04-09 00:50:08.198811 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:50:08.198819 | orchestrator |
2026-04-09 00:50:08.198826 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-09 00:50:08.198833 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.428) 0:04:14.763 ********
2026-04-09 00:50:08.198840 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:08.198848 | orchestrator |
2026-04-09 00:50:08.198855 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-09 00:50:08.198862 | orchestrator |
2026-04-09 00:50:08.198869 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-09 00:50:08.198877 | orchestrator | Thursday 09 April 2026 00:49:51 +0000 (0:00:04.967) 0:04:19.730 ********
2026-04-09 00:50:08.198884 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:50:08.198891 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:50:08.198899 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:50:08.198906 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:08.198913 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:08.198920 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:08.198927 | orchestrator |
2026-04-09 00:50:08.198935 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-09 00:50:08.198942 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:00.871) 0:04:20.602 ********
2026-04-09 00:50:08.198959 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 00:50:08.198967 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 00:50:08.198974 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 00:50:08.198980 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 00:50:08.198986 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-09 00:50:08.198992 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-09 00:50:08.198998 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 00:50:08.199005 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 00:50:08.199019 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 00:50:08.199026 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 00:50:08.199034 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-09 00:50:08.199041 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-09 00:50:08.199048 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 00:50:08.199055 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 00:50:08.199062 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 00:50:08.199074 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 00:50:08.199081 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-09 00:50:08.199087 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-09 00:50:08.199093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 00:50:08.199100 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 00:50:08.199108 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-09 00:50:08.199114 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 00:50:08.199122 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 00:50:08.199129 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-09 00:50:08.199136 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 00:50:08.199144 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 00:50:08.199152 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 00:50:08.199160 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-09 00:50:08.199167 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 00:50:08.199173 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-09 00:50:08.199181 | orchestrator |
2026-04-09 00:50:08.199188 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-09 00:50:08.199195 | orchestrator | Thursday 09 April 2026 00:50:05 +0000 (0:00:13.615) 0:04:34.218 ********
2026-04-09 00:50:08.199202 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.199208 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.199215 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.199222 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.199229 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.199235 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.199242 | orchestrator |
2026-04-09 00:50:08.199249 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-09 00:50:08.199255 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:00.691) 0:04:34.910 ********
2026-04-09 00:50:08.199262 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:50:08.199269 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:50:08.199275 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:50:08.199283 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:08.199289 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:08.199295 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:08.199302 | orchestrator |
2026-04-09 00:50:08.199308 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:50:08.199315 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:50:08.199324 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-09 00:50:08.199332 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 00:50:08.199339 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-09 00:50:08.199355 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 00:50:08.199929 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 00:50:08.199969 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-09 00:50:08.199977 | orchestrator |
2026-04-09 00:50:08.199983 | orchestrator |
2026-04-09 00:50:08.199989 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:50:08.200006 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:00.548) 0:04:35.459 ********
2026-04-09 00:50:08.200014 | orchestrator | ===============================================================================
2026-04-09 00:50:08.200021 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.20s
2026-04-09 00:50:08.200028 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 44.29s
2026-04-09 00:50:08.200035 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.28s
2026-04-09 00:50:08.200043 | orchestrator | Manage labels ---------------------------------------------------------- 13.62s
2026-04-09 00:50:08.200050 | orchestrator | kubectl : Install required packages ------------------------------------ 11.62s
2026-04-09 00:50:08.200056 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.15s
2026-04-09 00:50:08.200063 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.74s
2026-04-09 00:50:08.200070 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.89s
2026-04-09 00:50:08.200081 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.97s
2026-04-09 00:50:08.200088 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.30s
2026-04-09 00:50:08.200094 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.12s
2026-04-09 00:50:08.200101 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.76s
2026-04-09 00:50:08.200108 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.62s
2026-04-09 00:50:08.200115 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.45s
2026-04-09 00:50:08.200123 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.27s
2026-04-09 00:50:08.200129 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 2.17s
2026-04-09 00:50:08.200136 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.14s
2026-04-09 00:50:08.200143 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.09s
2026-04-09 00:50:08.200150 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.08s
2026-04-09 00:50:08.200156 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.07s
2026-04-09 00:50:08.200163 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:08.204793 | orchestrator | 2026-04-09 00:50:08 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:08.204861 | orchestrator | 2026-04-09 00:50:08 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:11.232313 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:11.232401 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task b7065e30-0b97-4fbc-add5-6cf9bbc59034 is in state STARTED
2026-04-09 00:50:11.232925 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task a4a25de2-741f-4db3-b707-2c101f8e1f36 is in state STARTED
2026-04-09 00:50:11.233466 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:11.234234 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:11.235553 | orchestrator | 2026-04-09 00:50:11 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:11.235593 | orchestrator | 2026-04-09 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:14.439067 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:14.439140 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task b7065e30-0b97-4fbc-add5-6cf9bbc59034 is in state SUCCESS
2026-04-09 00:50:14.439146 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task a4a25de2-741f-4db3-b707-2c101f8e1f36 is in state STARTED
2026-04-09 00:50:14.439151 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:14.445965 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:14.448732 | orchestrator | 2026-04-09 00:50:14 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:14.448800 | orchestrator | 2026-04-09 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:17.528169 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:17.529235 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task a4a25de2-741f-4db3-b707-2c101f8e1f36 is in state STARTED
2026-04-09 00:50:17.531744 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:17.532933 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:17.535000 | orchestrator | 2026-04-09 00:50:17 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:17.535046 | orchestrator | 2026-04-09 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:20.567833 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:20.568187 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task a4a25de2-741f-4db3-b707-2c101f8e1f36 is in state SUCCESS
2026-04-09 00:50:20.569340 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:20.570316 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:20.572169 | orchestrator | 2026-04-09 00:50:20 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:20.572204 | orchestrator | 2026-04-09 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:23.605999 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:23.607279 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:23.608691 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:23.609844 | orchestrator | 2026-04-09 00:50:23 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:23.609915 | orchestrator | 2026-04-09 00:50:23 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:26.651728 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:26.653731 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:26.654431 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:26.655186 | orchestrator | 2026-04-09 00:50:26 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:26.655212 | orchestrator | 2026-04-09 00:50:26 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:29.695792 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:29.696962 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:29.697761 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:29.700475 | orchestrator | 2026-04-09 00:50:29 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:29.700525 | orchestrator | 2026-04-09 00:50:29 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:32.737013 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:32.737223 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:32.738267 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:32.739403 | orchestrator | 2026-04-09 00:50:32 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:32.739452 | orchestrator | 2026-04-09 00:50:32 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:35.782587 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:35.782668 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:35.783433 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:35.784418 | orchestrator | 2026-04-09 00:50:35 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:35.784492 | orchestrator | 2026-04-09 00:50:35 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:38.809456 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:38.809862 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:38.810541 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:38.811282 | orchestrator | 2026-04-09 00:50:38 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state STARTED
2026-04-09 00:50:38.811310 | orchestrator | 2026-04-09 00:50:38 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:41.862896 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:41.865709 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:41.866671 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:41.869146 | orchestrator | 2026-04-09 00:50:41 | INFO  | Task 1c7d9292-1c6d-4e82-a9c3-8406b84f0dd2 is in state SUCCESS
2026-04-09 00:50:41.869205 | orchestrator | 2026-04-09 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:41.870901 | orchestrator |
2026-04-09 00:50:41.870968 | orchestrator |
2026-04-09 00:50:41.870983 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-09 00:50:41.870999 | orchestrator |
2026-04-09 00:50:41.871015 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 00:50:41.871030 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.202) 0:00:00.202 ********
2026-04-09 00:50:41.871045 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 00:50:41.871060 | orchestrator |
2026-04-09 00:50:41.871076 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 00:50:41.871090 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:01.042) 0:00:01.245 ********
2026-04-09 00:50:41.871105 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:41.871120 | orchestrator |
2026-04-09 00:50:41.871135 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-09 00:50:41.871150 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:01.392) 0:00:02.637 ********
2026-04-09 00:50:41.871165 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:41.871180 | orchestrator |
2026-04-09 00:50:41.871195 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:50:41.871209 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:50:41.871226 | orchestrator |
2026-04-09 00:50:41.871239 | orchestrator |
2026-04-09 00:50:41.871252 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:50:41.871267 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.420) 0:00:03.058 ********
2026-04-09 00:50:41.871283 | orchestrator | ===============================================================================
2026-04-09 00:50:41.871298 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.39s
2026-04-09 00:50:41.871314 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.04s
2026-04-09 00:50:41.871407 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s
2026-04-09 00:50:41.871425 | orchestrator |
2026-04-09 00:50:41.871440 | orchestrator |
2026-04-09 00:50:41.871449 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-09 00:50:41.871458 | orchestrator |
2026-04-09 00:50:41.871467 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-09 00:50:41.871475 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.234) 0:00:00.234 ********
2026-04-09 00:50:41.871484 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:41.871494 | orchestrator |
2026-04-09 00:50:41.871503 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-09 00:50:41.871512 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:00.775) 0:00:01.009 ********
2026-04-09 00:50:41.871521 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:41.871534 | orchestrator |
2026-04-09 00:50:41.871553 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 00:50:41.871575 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:00.538) 0:00:01.548 ********
2026-04-09 00:50:41.871589 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 00:50:41.871603 | orchestrator |
2026-04-09 00:50:41.871616 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 00:50:41.871630 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:01.121) 0:00:02.670 ********
2026-04-09 00:50:41.871644 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:41.871659 | orchestrator |
2026-04-09 00:50:41.871674 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-09 00:50:41.871690 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:01.139) 0:00:03.809 ********
2026-04-09 00:50:41.871704 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:41.871741 | orchestrator |
2026-04-09 00:50:41.871751 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-09 00:50:41.871759 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:00.513) 0:00:04.323 ********
2026-04-09 00:50:41.871768 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:50:41.871777 | orchestrator |
2026-04-09 00:50:41.871785 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-09 00:50:41.871794 | orchestrator | Thursday 09 April 2026 00:50:16 +0000 (0:00:01.793) 0:00:06.116 ********
2026-04-09 00:50:41.871802 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:50:41.871811 | orchestrator |
2026-04-09 00:50:41.871819 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-09 00:50:41.871828 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.694) 0:00:06.810 ********
2026-04-09 00:50:41.871836 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:41.871845 | orchestrator |
2026-04-09 00:50:41.871853 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-09 00:50:41.871862 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.342) 0:00:07.153 ********
2026-04-09 00:50:41.871870 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:41.871879 | orchestrator |
2026-04-09 00:50:41.871888 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:50:41.871896 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:50:41.871906 | orchestrator |
2026-04-09 00:50:41.871914 | orchestrator |
2026-04-09 00:50:41.871923 | orchestrator | TASKS RECAP
******************************************************************** 2026-04-09 00:50:41.871931 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.310) 0:00:07.463 ******** 2026-04-09 00:50:41.871941 | orchestrator | =============================================================================== 2026-04-09 00:50:41.871955 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.79s 2026-04-09 00:50:41.871985 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s 2026-04-09 00:50:41.872000 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.12s 2026-04-09 00:50:41.872033 | orchestrator | Get home directory of operator user ------------------------------------- 0.78s 2026-04-09 00:50:41.872047 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.69s 2026-04-09 00:50:41.872061 | orchestrator | Create .kube directory -------------------------------------------------- 0.54s 2026-04-09 00:50:41.872074 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.51s 2026-04-09 00:50:41.872086 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s 2026-04-09 00:50:41.872098 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2026-04-09 00:50:41.872113 | orchestrator | 2026-04-09 00:50:41.872127 | orchestrator | 2026-04-09 00:50:41.872142 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-09 00:50:41.872157 | orchestrator | 2026-04-09 00:50:41.872172 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-09 00:50:41.872188 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:00.169) 0:00:00.169 ******** 2026-04-09 00:50:41.872203 | orchestrator | ok: [localhost] => { 
2026-04-09 00:50:41.872218 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-09 00:50:41.872234 | orchestrator | } 2026-04-09 00:50:41.872248 | orchestrator | 2026-04-09 00:50:41.872264 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-09 00:50:41.872276 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:00.124) 0:00:00.294 ******** 2026-04-09 00:50:41.872290 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-09 00:50:41.872316 | orchestrator | ...ignoring 2026-04-09 00:50:41.872359 | orchestrator | 2026-04-09 00:50:41.872373 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-09 00:50:41.872387 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:03.550) 0:00:03.844 ******** 2026-04-09 00:50:41.872401 | orchestrator | skipping: [localhost] 2026-04-09 00:50:41.872413 | orchestrator | 2026-04-09 00:50:41.872426 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-09 00:50:41.872441 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:00.128) 0:00:03.972 ******** 2026-04-09 00:50:41.872456 | orchestrator | ok: [localhost] 2026-04-09 00:50:41.872472 | orchestrator | 2026-04-09 00:50:41.872485 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:50:41.872501 | orchestrator | 2026-04-09 00:50:41.872512 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:50:41.872521 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:00.229) 0:00:04.202 ******** 2026-04-09 00:50:41.872529 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:41.872538 | 
orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:41.872546 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:41.872555 | orchestrator | 2026-04-09 00:50:41.872563 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:50:41.872572 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:00.321) 0:00:04.523 ******** 2026-04-09 00:50:41.872581 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-09 00:50:41.872590 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-09 00:50:41.872598 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-09 00:50:41.872607 | orchestrator | 2026-04-09 00:50:41.872616 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-09 00:50:41.872624 | orchestrator | 2026-04-09 00:50:41.872633 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 00:50:41.872642 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:00.419) 0:00:04.943 ******** 2026-04-09 00:50:41.872651 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:41.872660 | orchestrator | 2026-04-09 00:50:41.872669 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-09 00:50:41.872677 | orchestrator | Thursday 09 April 2026 00:48:30 +0000 (0:00:00.547) 0:00:05.491 ******** 2026-04-09 00:50:41.872686 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:41.872694 | orchestrator | 2026-04-09 00:50:41.872703 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-09 00:50:41.872711 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:01.407) 0:00:06.898 ******** 2026-04-09 00:50:41.872720 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:50:41.872729 | orchestrator | 2026-04-09 00:50:41.872737 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-09 00:50:41.872746 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:00.341) 0:00:07.240 ******** 2026-04-09 00:50:41.872754 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.872763 | orchestrator | 2026-04-09 00:50:41.872772 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-09 00:50:41.872780 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:00.345) 0:00:07.585 ******** 2026-04-09 00:50:41.872789 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.872798 | orchestrator | 2026-04-09 00:50:41.872806 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-09 00:50:41.872815 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:00.437) 0:00:08.023 ******** 2026-04-09 00:50:41.872823 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.872832 | orchestrator | 2026-04-09 00:50:41.872841 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 00:50:41.872849 | orchestrator | Thursday 09 April 2026 00:48:33 +0000 (0:00:00.497) 0:00:08.520 ******** 2026-04-09 00:50:41.872875 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:41.872884 | orchestrator | 2026-04-09 00:50:41.872893 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-09 00:50:41.872913 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:00.907) 0:00:09.428 ******** 2026-04-09 00:50:41.872922 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:41.872931 | orchestrator | 2026-04-09 00:50:41.872939 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] 
*************************************** 2026-04-09 00:50:41.872948 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:00.757) 0:00:10.186 ******** 2026-04-09 00:50:41.872957 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.872965 | orchestrator | 2026-04-09 00:50:41.872974 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-09 00:50:41.872983 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.702) 0:00:10.889 ******** 2026-04-09 00:50:41.872992 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.873000 | orchestrator | 2026-04-09 00:50:41.873009 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-09 00:50:41.873017 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.375) 0:00:11.264 ******** 2026-04-09 00:50:41.873032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873046 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873073 | orchestrator | 2026-04-09 00:50:41.873081 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-09 00:50:41.873102 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:01.764) 0:00:13.029 ******** 2026-04-09 00:50:41.873120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873149 | orchestrator | 2026-04-09 00:50:41.873164 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-09 00:50:41.873173 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:01.524) 0:00:14.554 ******** 2026-04-09 
00:50:41.873182 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-09 00:50:41.873191 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-09 00:50:41.873200 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-09 00:50:41.873208 | orchestrator | 2026-04-09 00:50:41.873217 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-09 00:50:41.873226 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:02.011) 0:00:16.565 ******** 2026-04-09 00:50:41.873234 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-09 00:50:41.873243 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-09 00:50:41.873256 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-09 00:50:41.873264 | orchestrator | 2026-04-09 00:50:41.873273 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-09 00:50:41.873287 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:04.098) 0:00:20.664 ******** 2026-04-09 00:50:41.873296 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-09 00:50:41.873305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-09 00:50:41.873313 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-09 00:50:41.873322 | orchestrator | 2026-04-09 00:50:41.873353 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-09 00:50:41.873362 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 
(0:00:01.687) 0:00:22.351 ******** 2026-04-09 00:50:41.873370 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-09 00:50:41.873379 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-09 00:50:41.873388 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-09 00:50:41.873396 | orchestrator | 2026-04-09 00:50:41.873405 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-09 00:50:41.873416 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:01.565) 0:00:23.916 ******** 2026-04-09 00:50:41.873430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 00:50:41.873445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 00:50:41.873467 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-09 00:50:41.873482 | orchestrator | 2026-04-09 00:50:41.873496 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-09 00:50:41.873509 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:01.544) 0:00:25.461 ******** 2026-04-09 00:50:41.873523 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 00:50:41.873537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 00:50:41.873551 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-09 00:50:41.873565 | orchestrator | 2026-04-09 00:50:41.873580 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-09 
00:50:41.873595 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:01.968) 0:00:27.429 ******** 2026-04-09 00:50:41.873609 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.873635 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:41.873648 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:41.873657 | orchestrator | 2026-04-09 00:50:41.873666 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-09 00:50:41.873675 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:00.411) 0:00:27.841 ******** 2026-04-09 00:50:41.873685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:41.873731 | orchestrator | 2026-04-09 00:50:41.873740 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
************************************* 2026-04-09 00:50:41.873749 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:01.208) 0:00:29.049 ******** 2026-04-09 00:50:41.873758 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:41.873766 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:41.873775 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:41.873790 | orchestrator | 2026-04-09 00:50:41.873799 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-09 00:50:41.873808 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.958) 0:00:30.008 ******** 2026-04-09 00:50:41.873818 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:41.873827 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:41.873836 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:41.873845 | orchestrator | 2026-04-09 00:50:41.873854 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-09 00:50:41.873863 | orchestrator | Thursday 09 April 2026 00:49:01 +0000 (0:00:07.064) 0:00:37.072 ******** 2026-04-09 00:50:41.873872 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:41.873881 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:41.873890 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:41.873899 | orchestrator | 2026-04-09 00:50:41.873908 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-09 00:50:41.873917 | orchestrator | 2026-04-09 00:50:41.873926 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-09 00:50:41.873936 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.409) 0:00:37.482 ******** 2026-04-09 00:50:41.873951 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:41.873965 | orchestrator | 2026-04-09 00:50:41.873979 | orchestrator | TASK [rabbitmq : Put RabbitMQ node 
into maintenance mode] ********************** 2026-04-09 00:50:41.873994 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.552) 0:00:38.034 ******** 2026-04-09 00:50:41.874008 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:41.874118 | orchestrator | 2026-04-09 00:50:41.874129 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-09 00:50:41.874138 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.189) 0:00:38.224 ******** 2026-04-09 00:50:41.874146 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:41.874155 | orchestrator | 2026-04-09 00:50:41.874164 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-09 00:50:41.874174 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:07.025) 0:00:45.250 ******** 2026-04-09 00:50:41.874182 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:41.874190 | orchestrator | 2026-04-09 00:50:41.874200 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-09 00:50:41.874208 | orchestrator | 2026-04-09 00:50:41.874217 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-09 00:50:41.874226 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 (0:00:49.599) 0:01:34.850 ******** 2026-04-09 00:50:41.874235 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:41.874244 | orchestrator | 2026-04-09 00:50:41.874252 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-09 00:50:41.874261 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.816) 0:01:35.666 ******** 2026-04-09 00:50:41.874270 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:41.874278 | orchestrator | 2026-04-09 00:50:41.874287 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 
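The three sequential "Restart rabbitmq services" plays above follow one pattern per node: inspect the container, optionally enter maintenance mode, restart, then wait for RabbitMQ to come back before moving to the next node. A minimal Python sketch of that serial rolling-restart loop (a hypothetical outline, not kolla-ansible's actual implementation; `restart` and `is_up` are illustrative injected callables standing in for the container actions):

```python
import time

def restart_rabbitmq_serially(nodes, restart, is_up, timeout=120.0, poll=5.0):
    """Restart RabbitMQ one node at a time, waiting for each node to come
    back up before touching the next -- mirroring the three sequential
    'Restart rabbitmq services' plays in the log above.

    `restart(node)` and `is_up(node)` are caller-supplied stand-ins for
    the real container restart and health probe (illustrative only)."""
    for node in nodes:
        restart(node)
        deadline = time.monotonic() + timeout
        # Poll until the node reports healthy, or give up after `timeout`.
        while not is_up(node):
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{node}: RabbitMQ did not come back up")
            time.sleep(poll)

# Example with stub callables: every node reports healthy immediately.
restart_rabbitmq_serially(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    restart=lambda n: None,
    is_up=lambda n: True,
)
```

Restarting serially rather than in parallel is what keeps the cluster quorate throughout: at most one node is down at any moment, which matches the roughly 50s, 11s and 11s per-node "Waiting for rabbitmq to start" gaps in the timestamps above.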
2026-04-09 00:50:41.874296 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.276) 0:01:35.942 ********
2026-04-09 00:50:41.874305 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:41.874313 | orchestrator |
2026-04-09 00:50:41.874322 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 00:50:41.874361 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:06.615) 0:01:42.558 ********
2026-04-09 00:50:41.874371 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:41.874380 | orchestrator |
2026-04-09 00:50:41.874389 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-09 00:50:41.874398 | orchestrator |
2026-04-09 00:50:41.874414 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-09 00:50:41.874424 | orchestrator | Thursday 09 April 2026 00:50:18 +0000 (0:00:11.032) 0:01:53.590 ********
2026-04-09 00:50:41.874433 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:41.874449 | orchestrator |
2026-04-09 00:50:41.874469 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-09 00:50:41.874485 | orchestrator | Thursday 09 April 2026 00:50:18 +0000 (0:00:00.751) 0:01:54.342 ********
2026-04-09 00:50:41.874506 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:41.874524 | orchestrator |
2026-04-09 00:50:41.874539 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-09 00:50:41.874552 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:00.239) 0:01:54.582 ********
2026-04-09 00:50:41.874567 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:41.874581 | orchestrator |
2026-04-09 00:50:41.874594 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-09 00:50:41.874608 | orchestrator | Thursday 09 April 2026 00:50:26 +0000 (0:00:07.301) 0:02:01.883 ********
2026-04-09 00:50:41.874623 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:41.874637 | orchestrator |
2026-04-09 00:50:41.874649 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-09 00:50:41.874663 | orchestrator |
2026-04-09 00:50:41.874677 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-09 00:50:41.874692 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:11.598) 0:02:13.482 ********
2026-04-09 00:50:41.874706 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:41.874720 | orchestrator |
2026-04-09 00:50:41.874736 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-09 00:50:41.874751 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:00.575) 0:02:14.058 ********
2026-04-09 00:50:41.874767 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:41.874782 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:41.874797 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:41.874811 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-09 00:50:41.874827 | orchestrator | enable_outward_rabbitmq_True
2026-04-09 00:50:41.874843 | orchestrator |
2026-04-09 00:50:41.874857 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-09 00:50:41.874872 | orchestrator | skipping: no hosts matched
2026-04-09 00:50:41.874882 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-09 00:50:41.874891 | orchestrator | outward_rabbitmq_restart
2026-04-09 00:50:41.874899 | orchestrator |
2026-04-09 00:50:41.874908 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-09 00:50:41.874917 | orchestrator | skipping: no hosts matched
2026-04-09 00:50:41.874926 | orchestrator |
2026-04-09 00:50:41.874935 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-09 00:50:41.874944 | orchestrator | skipping: no hosts matched
2026-04-09 00:50:41.874952 | orchestrator |
2026-04-09 00:50:41.874961 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:50:41.874971 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:50:41.874982 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-09 00:50:41.874991 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:50:41.875003 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:50:41.875018 | orchestrator |
2026-04-09 00:50:41.875040 | orchestrator |
2026-04-09 00:50:41.875056 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:50:41.875071 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:02.517) 0:02:16.576 ********
2026-04-09 00:50:41.875085 | orchestrator | ===============================================================================
2026-04-09 00:50:41.875113 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 72.23s
2026-04-09 00:50:41.875127 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 20.94s
2026-04-09 00:50:41.875140 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.07s
2026-04-09 00:50:41.875155 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.10s
2026-04-09 00:50:41.875169 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.55s
2026-04-09 00:50:41.875183 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.52s
2026-04-09 00:50:41.875199 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s
2026-04-09 00:50:41.875214 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.01s
2026-04-09 00:50:41.875230 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.97s
2026-04-09 00:50:41.875244 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.76s
2026-04-09 00:50:41.875258 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.69s
2026-04-09 00:50:41.875267 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.57s
2026-04-09 00:50:41.875276 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.54s
2026-04-09 00:50:41.875284 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.52s
2026-04-09 00:50:41.875293 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.41s
2026-04-09 00:50:41.875309 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.21s
2026-04-09 00:50:41.875319 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.96s
2026-04-09 00:50:41.875377 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.91s
2026-04-09 00:50:41.875388 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.76s
2026-04-09 00:50:41.875397 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.71s
2026-04-09 00:50:44.901951 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task
bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:50:44.902863 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:50:44.904115 | orchestrator | 2026-04-09 00:50:44 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state STARTED
2026-04-09 00:50:44.904153 | orchestrator | 2026-04-09 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:15.294702 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:15.295088 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:15.296766 | orchestrator |
2026-04-09 00:51:15.296814 | orchestrator | 2026-04-09 00:51:15 | INFO  | Task 446722a3-7204-4bdf-a5d9-963a6dd219e5 is in state SUCCESS
2026-04-09 00:51:15.298756 | orchestrator |
2026-04-09 00:51:15.298802 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:51:15.298811 | orchestrator |
2026-04-09 00:51:15.298819 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:51:15.298826 | orchestrator | Thursday 09 April 2026
00:48:54 +0000 (0:00:00.212) 0:00:00.212 ********
2026-04-09 00:51:15.298835 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.298843 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.298850 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.298856 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:51:15.298863 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:51:15.298870 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:51:15.298877 | orchestrator |
2026-04-09 00:51:15.298884 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:51:15.298891 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:00.971) 0:00:01.183 ********
2026-04-09 00:51:15.298898 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-09 00:51:15.298904 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-09 00:51:15.298911 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-09 00:51:15.298917 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-09 00:51:15.298924 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-09 00:51:15.298930 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-09 00:51:15.298937 | orchestrator |
2026-04-09 00:51:15.298943 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-09 00:51:15.298950 | orchestrator |
2026-04-09 00:51:15.298956 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-09 00:51:15.298962 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:01.700) 0:00:02.884 ********
2026-04-09 00:51:15.298970 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:51:15.298978 | orchestrator |
2026-04-09 00:51:15.298985 |
orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-09 00:51:15.298992 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:01.292) 0:00:04.177 ******** 2026-04-09 00:51:15.299001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299069 | orchestrator | 2026-04-09 00:51:15.299088 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-09 00:51:15.299095 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:01.592) 0:00:05.769 ******** 2026-04-09 00:51:15.299102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299149 | orchestrator | 2026-04-09 00:51:15.299155 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-09 00:51:15.299162 | orchestrator | Thursday 09 April 2026 00:49:01 +0000 (0:00:01.850) 0:00:07.620 ******** 2026-04-09 00:51:15.299171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299217 | orchestrator | 2026-04-09 00:51:15.299224 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-09 00:51:15.299230 | orchestrator | Thursday 09 April 2026 00:49:03 +0000 (0:00:01.530) 0:00:09.150 ******** 2026-04-09 00:51:15.299237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299244 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299338 | orchestrator | 2026-04-09 00:51:15.299349 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-09 00:51:15.299357 | orchestrator | Thursday 09 April 2026 00:49:04 +0000 (0:00:01.340) 0:00:10.490 ******** 2026-04-09 00:51:15.299366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299450 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.299477 | orchestrator | 2026-04-09 00:51:15.299485 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-09 00:51:15.299492 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:01.250) 0:00:11.741 ******** 2026-04-09 00:51:15.299497 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:51:15.299507 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:51:15.299513 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:15.299521 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:51:15.299529 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:51:15.299536 | orchestrator | changed: 
[testbed-node-4] 2026-04-09 00:51:15.299544 | orchestrator | 2026-04-09 00:51:15.299551 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-09 00:51:15.299559 | orchestrator | Thursday 09 April 2026 00:49:08 +0000 (0:00:02.602) 0:00:14.344 ******** 2026-04-09 00:51:15.299566 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-09 00:51:15.299574 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-09 00:51:15.299582 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-09 00:51:15.299588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-09 00:51:15.299596 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-09 00:51:15.299604 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-09 00:51:15.299612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:51:15.299620 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:51:15.299632 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:51:15.299642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:51:15.299650 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:51:15.299658 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-09 00:51:15.299666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-09 00:51:15.299675 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-09 00:51:15.299683 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-09 00:51:15.299690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-09 00:51:15.299697 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-09 00:51:15.299708 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-09 00:51:15.299716 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:51:15.299723 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:51:15.299730 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:51:15.299737 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:51:15.299744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:51:15.299750 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-09 00:51:15.299757 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:51:15.299763 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:51:15.299770 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:51:15.299777 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:51:15.299784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:51:15.299790 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-09 00:51:15.299798 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:51:15.299805 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:51:15.299811 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:51:15.299817 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:51:15.299824 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:51:15.299835 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-09 00:51:15.299842 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 00:51:15.299849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 00:51:15.299855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-09 00:51:15.299862 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 
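The task output above shows per-chassis settings being written into the Open_vSwitch `external-ids` column: every node gets its own Geneve tunnel endpoint, all nodes share the same `ovn-remote` pointing at the three controllers' southbound DBs on port 6642, and only the controller nodes keep `ovn-bridge-mappings` and `ovn-cms-options` (the compute nodes get those keys removed and `ovn-chassis-mac-mappings` set instead). A minimal Python sketch of how those values fit together; the helper name is hypothetical, but the key names and values are taken from the log:

```python
def build_ovn_external_ids(node_ip, controller_ips, is_gateway):
    """Assemble the external-ids written by the 'Configure OVN in OVSDB'
    task. Illustrative helper; key names/values copied from the job log."""
    # ovn-remote: one TCP endpoint per OVN SB DB server, port 6642
    ovn_remote = ",".join(f"tcp:{ip}:6642" for ip in controller_ips)
    ids = {
        "ovn-encap-ip": node_ip,               # this chassis' tunnel endpoint
        "ovn-encap-type": "geneve",            # encapsulation between chassis
        "ovn-remote": ovn_remote,
        "ovn-remote-probe-interval": "60000",  # milliseconds
        "ovn-openflow-probe-interval": "60",   # seconds
        "ovn-monitor-all": "false",
    }
    if is_gateway:
        # Only gateway chassis (here: the controllers) carry the provider
        # bridge mapping and advertise themselves to the CMS as gateways.
        ids["ovn-bridge-mappings"] = "physnet1:br-ex"
        ids["ovn-cms-options"] = "enable-chassis-as-gw,availability-zones=nova"
    return ids

controllers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
ids = build_ovn_external_ids("192.168.16.11", controllers, is_gateway=True)
```

On a real host each of these would be applied with something like `ovs-vsctl set open_vswitch . external-ids:ovn-encap-type=geneve`; the exact invocation the role uses is not visible in this log.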
2026-04-09 00:51:15.299869 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 00:51:15.299876 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-09 00:51:15.299882 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-09 00:51:15.299890 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-09 00:51:15.299900 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-09 00:51:15.299907 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-09 00:51:15.299918 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-09 00:51:15.299924 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-09 00:51:15.299929 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 00:51:15.299935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 00:51:15.299941 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-09 00:51:15.299946 | orchestrator | ok: [testbed-node-5] => (item={'name': 
'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 00:51:15.299952 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 00:51:15.299957 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-09 00:51:15.299962 | orchestrator | 2026-04-09 00:51:15.299968 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:51:15.299974 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:17.713) 0:00:32.057 ******** 2026-04-09 00:51:15.299979 | orchestrator | 2026-04-09 00:51:15.299986 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:51:15.299993 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.063) 0:00:32.120 ******** 2026-04-09 00:51:15.299999 | orchestrator | 2026-04-09 00:51:15.300006 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:51:15.300013 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.073) 0:00:32.193 ******** 2026-04-09 00:51:15.300019 | orchestrator | 2026-04-09 00:51:15.300025 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:51:15.300033 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.067) 0:00:32.261 ******** 2026-04-09 00:51:15.300040 | orchestrator | 2026-04-09 00:51:15.300047 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:51:15.300053 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.067) 0:00:32.328 ******** 2026-04-09 00:51:15.300060 | orchestrator | 2026-04-09 00:51:15.300067 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-09 00:51:15.300073 | orchestrator | Thursday 09 April 2026 00:49:26 
+0000 (0:00:00.067) 0:00:32.396 ******** 2026-04-09 00:51:15.300080 | orchestrator | 2026-04-09 00:51:15.300087 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-09 00:51:15.300094 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.066) 0:00:32.462 ******** 2026-04-09 00:51:15.300102 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.300109 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.300116 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.300123 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:51:15.300130 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:51:15.300137 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:51:15.300143 | orchestrator | 2026-04-09 00:51:15.300151 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-09 00:51:15.300157 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:02.516) 0:00:34.979 ******** 2026-04-09 00:51:15.300164 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:15.300171 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:51:15.300178 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:51:15.300185 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:51:15.300197 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:51:15.300204 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:51:15.300216 | orchestrator | 2026-04-09 00:51:15.300223 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-09 00:51:15.300230 | orchestrator | 2026-04-09 00:51:15.300237 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 00:51:15.300243 | orchestrator | Thursday 09 April 2026 00:49:53 +0000 (0:00:24.759) 0:00:59.739 ******** 2026-04-09 00:51:15.300250 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-09 00:51:15.300257 | orchestrator | 2026-04-09 00:51:15.300294 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 00:51:15.300301 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:01.431) 0:01:01.170 ******** 2026-04-09 00:51:15.300308 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:51:15.300315 | orchestrator | 2026-04-09 00:51:15.300321 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-09 00:51:15.300328 | orchestrator | Thursday 09 April 2026 00:49:56 +0000 (0:00:01.588) 0:01:02.758 ******** 2026-04-09 00:51:15.300336 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.300343 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.300349 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.300356 | orchestrator | 2026-04-09 00:51:15.300363 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-09 00:51:15.300370 | orchestrator | Thursday 09 April 2026 00:49:57 +0000 (0:00:00.946) 0:01:03.704 ******** 2026-04-09 00:51:15.300377 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.300384 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.300391 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.300398 | orchestrator | 2026-04-09 00:51:15.300411 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-09 00:51:15.300419 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:00.420) 0:01:04.125 ******** 2026-04-09 00:51:15.300426 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.300434 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.300441 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.300447 | orchestrator | 2026-04-09 00:51:15.300455 | orchestrator | 
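The "Divide hosts by their OVN NB/SB volume availability" tasks drive the bootstrap decision that follows: hosts that already own an OVN DB container volume are treated as members of an existing cluster, hosts without one need a fresh database. On this fresh testbed no volumes exist, which is why the role later takes the `bootstrap-initial.yml` / "new cluster" path and skips every "new member" task. A sketch of that partitioning logic, with hypothetical helper and variable names:

```python
def divide_by_volume(volume_facts):
    """Split hosts the way the 'Divide hosts by ... volume availability'
    tasks do: hosts that already have an OVN DB volume vs. hosts that
    would bootstrap a fresh database. Illustrative only."""
    have = [h for h, present in volume_facts.items() if present]
    missing = [h for h, present in volume_facts.items() if not present]
    return have, missing

# Fresh deployment: no ovn_nb_db/ovn_sb_db volumes anywhere yet
facts = {"testbed-node-0": False, "testbed-node-1": False, "testbed-node-2": False}
have, missing = divide_by_volume(facts)
cluster_exists = bool(have)  # selects "new cluster" vs "new member" bootstrap args
```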
TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-09 00:51:15.300461 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:00.379) 0:01:04.504 ******** 2026-04-09 00:51:15.300467 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.300474 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.300480 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.300487 | orchestrator | 2026-04-09 00:51:15.300493 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-09 00:51:15.300499 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:00.238) 0:01:04.743 ******** 2026-04-09 00:51:15.300505 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.300511 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.300516 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.300522 | orchestrator | 2026-04-09 00:51:15.300528 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-09 00:51:15.300534 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 (0:00:00.247) 0:01:04.991 ******** 2026-04-09 00:51:15.300541 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300548 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300556 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300563 | orchestrator | 2026-04-09 00:51:15.300569 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-09 00:51:15.300577 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 (0:00:00.596) 0:01:05.587 ******** 2026-04-09 00:51:15.300584 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300591 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300597 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300604 | orchestrator | 2026-04-09 00:51:15.300611 | orchestrator | TASK [ovn-db : Divide hosts by their 
OVN NB service port liveness] ************* 2026-04-09 00:51:15.300623 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.479) 0:01:06.066 ******** 2026-04-09 00:51:15.300631 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300638 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300644 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300651 | orchestrator | 2026-04-09 00:51:15.300658 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-09 00:51:15.300664 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.460) 0:01:06.527 ******** 2026-04-09 00:51:15.300671 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300678 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300684 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300692 | orchestrator | 2026-04-09 00:51:15.300698 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-09 00:51:15.300705 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.254) 0:01:06.781 ******** 2026-04-09 00:51:15.300712 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300718 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300724 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300731 | orchestrator | 2026-04-09 00:51:15.300738 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-09 00:51:15.300745 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:00.285) 0:01:07.066 ******** 2026-04-09 00:51:15.300751 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300758 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300764 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300771 | orchestrator | 2026-04-09 00:51:15.300777 | orchestrator | TASK [ovn-db : Check if running on all 
OVN SB DB hosts] ************************ 2026-04-09 00:51:15.300783 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:00.298) 0:01:07.365 ******** 2026-04-09 00:51:15.300790 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300796 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300803 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300809 | orchestrator | 2026-04-09 00:51:15.300815 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-09 00:51:15.300822 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:00.491) 0:01:07.857 ******** 2026-04-09 00:51:15.300832 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300839 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300846 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300853 | orchestrator | 2026-04-09 00:51:15.300859 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-09 00:51:15.300866 | orchestrator | Thursday 09 April 2026 00:50:02 +0000 (0:00:00.338) 0:01:08.196 ******** 2026-04-09 00:51:15.300873 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300879 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300886 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300892 | orchestrator | 2026-04-09 00:51:15.300899 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-09 00:51:15.300905 | orchestrator | Thursday 09 April 2026 00:50:02 +0000 (0:00:00.256) 0:01:08.452 ******** 2026-04-09 00:51:15.300911 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300917 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300924 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300930 | orchestrator | 2026-04-09 00:51:15.300936 | orchestrator | TASK [ovn-db : Divide hosts by their OVN 
SB leader/follower role] ************** 2026-04-09 00:51:15.300943 | orchestrator | Thursday 09 April 2026 00:50:03 +0000 (0:00:00.570) 0:01:09.023 ******** 2026-04-09 00:51:15.300950 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.300956 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.300963 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.300969 | orchestrator | 2026-04-09 00:51:15.300976 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-09 00:51:15.300988 | orchestrator | Thursday 09 April 2026 00:50:03 +0000 (0:00:00.439) 0:01:09.462 ******** 2026-04-09 00:51:15.300994 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301001 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301014 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301021 | orchestrator | 2026-04-09 00:51:15.301027 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-09 00:51:15.301035 | orchestrator | Thursday 09 April 2026 00:50:03 +0000 (0:00:00.312) 0:01:09.774 ******** 2026-04-09 00:51:15.301042 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:51:15.301049 | orchestrator | 2026-04-09 00:51:15.301055 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-09 00:51:15.301061 | orchestrator | Thursday 09 April 2026 00:50:04 +0000 (0:00:00.867) 0:01:10.642 ******** 2026-04-09 00:51:15.301068 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.301075 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.301082 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.301088 | orchestrator | 2026-04-09 00:51:15.301094 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-09 00:51:15.301101 | 
orchestrator | Thursday 09 April 2026 00:50:05 +0000 (0:00:00.908) 0:01:11.550 ******** 2026-04-09 00:51:15.301107 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:15.301114 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:15.301121 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:15.301127 | orchestrator | 2026-04-09 00:51:15.301134 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-09 00:51:15.301141 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:00.758) 0:01:12.309 ******** 2026-04-09 00:51:15.301148 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301155 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301161 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301168 | orchestrator | 2026-04-09 00:51:15.301174 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-09 00:51:15.301180 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:00.522) 0:01:12.832 ******** 2026-04-09 00:51:15.301187 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301193 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301199 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301206 | orchestrator | 2026-04-09 00:51:15.301212 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-09 00:51:15.301219 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:00.299) 0:01:13.132 ******** 2026-04-09 00:51:15.301225 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301231 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301238 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301244 | orchestrator | 2026-04-09 00:51:15.301251 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-09 00:51:15.301257 | 
orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:00.564) 0:01:13.696 ******** 2026-04-09 00:51:15.301313 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301323 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301330 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301337 | orchestrator | 2026-04-09 00:51:15.301344 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-09 00:51:15.301351 | orchestrator | Thursday 09 April 2026 00:50:08 +0000 (0:00:00.553) 0:01:14.250 ******** 2026-04-09 00:51:15.301357 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301364 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301370 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301377 | orchestrator | 2026-04-09 00:51:15.301383 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-09 00:51:15.301390 | orchestrator | Thursday 09 April 2026 00:50:08 +0000 (0:00:00.647) 0:01:14.898 ******** 2026-04-09 00:51:15.301403 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:15.301410 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:15.301417 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:15.301424 | orchestrator | 2026-04-09 00:51:15.301431 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-09 00:51:15.301437 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:00.276) 0:01:15.174 ******** 2026-04-09 00:51:15.301452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 
00:51:15.301466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301533 | orchestrator | 2026-04-09 00:51:15.301540 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-09 00:51:15.301547 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:01.842) 0:01:17.017 ******** 2026-04-09 00:51:15.301555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301635 | orchestrator | 2026-04-09 00:51:15.301642 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-09 00:51:15.301648 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:04.159) 0:01:21.176 ******** 2026-04-09 00:51:15.301655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301698 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:51:15.301729 | orchestrator | 2026-04-09 00:51:15.301736 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-09 00:51:15.301744 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:02.342) 0:01:23.519 ******** 
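The service dictionaries repeated through the "Ensuring config directories exist", "Copying over config.json files" and "Check ovn containers" tasks follow the usual kolla-ansible convention: each service maps to a container name, an image, and a list of bind mounts, with the first mount exposing the host-side `/etc/kolla/<service>/` config directory read-only inside the container. A small sketch reconstructing one such definition from the log (values copied from the output; the `config_dir` helper and its path convention are inferred from the mounts, not confirmed by the log):

```python
# One service definition, reconstructed from the loop items in the log.
services = {
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.2",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",  # named volume holding the DB
            "kolla_logs:/var/log/kolla/",
        ],
    },
}

def config_dir(service_name):
    """Host-side directory whose config.json the 'Copying over config.json
    files for services' task writes. Path convention inferred from the
    first bind mount of each service."""
    return f"/etc/kolla/{service_name}"
```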
2026-04-09 00:51:15.301750 | orchestrator |
2026-04-09 00:51:15.301757 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 00:51:15.301764 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.063) 0:01:23.583 ********
2026-04-09 00:51:15.301771 | orchestrator |
2026-04-09 00:51:15.301778 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 00:51:15.301785 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.071) 0:01:23.654 ********
2026-04-09 00:51:15.301792 | orchestrator |
2026-04-09 00:51:15.301798 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-09 00:51:15.301805 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.070) 0:01:23.725 ********
2026-04-09 00:51:15.301811 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.301818 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:15.301824 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:15.301831 | orchestrator |
2026-04-09 00:51:15.301837 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-09 00:51:15.301845 | orchestrator | Thursday 09 April 2026 00:50:26 +0000 (0:00:08.621) 0:01:32.347 ********
2026-04-09 00:51:15.301851 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.301858 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:15.301864 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:15.301871 | orchestrator |
2026-04-09 00:51:15.301877 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-09 00:51:15.301884 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:03.132) 0:01:35.479 ********
2026-04-09 00:51:15.301891 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:15.301897 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.301904 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:15.301910 | orchestrator |
2026-04-09 00:51:15.301917 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-09 00:51:15.301923 | orchestrator | Thursday 09 April 2026 00:50:36 +0000 (0:00:07.343) 0:01:42.822 ********
2026-04-09 00:51:15.301930 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:15.301936 | orchestrator |
2026-04-09 00:51:15.301942 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-09 00:51:15.301952 | orchestrator | Thursday 09 April 2026 00:50:37 +0000 (0:00:00.122) 0:01:42.944 ********
2026-04-09 00:51:15.301959 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.301965 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.301971 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.301978 | orchestrator |
2026-04-09 00:51:15.301984 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-09 00:51:15.301990 | orchestrator | Thursday 09 April 2026 00:50:37 +0000 (0:00:00.776) 0:01:43.720 ********
2026-04-09 00:51:15.301997 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:15.302003 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:15.302010 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.302069 | orchestrator |
2026-04-09 00:51:15.302076 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-09 00:51:15.302084 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:00.726) 0:01:44.447 ********
2026-04-09 00:51:15.302091 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.302098 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.302105 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.302112 | orchestrator |
2026-04-09 00:51:15.302118 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-09 00:51:15.302125 | orchestrator | Thursday 09 April 2026 00:50:39 +0000 (0:00:00.936) 0:01:45.384 ********
2026-04-09 00:51:15.302131 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:15.302145 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:15.302152 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.302159 | orchestrator |
2026-04-09 00:51:15.302166 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-09 00:51:15.302173 | orchestrator | Thursday 09 April 2026 00:50:40 +0000 (0:00:00.601) 0:01:45.985 ********
2026-04-09 00:51:15.302179 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.302186 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.302200 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.302207 | orchestrator |
2026-04-09 00:51:15.302215 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-09 00:51:15.302222 | orchestrator | Thursday 09 April 2026 00:50:40 +0000 (0:00:00.767) 0:01:46.753 ********
2026-04-09 00:51:15.302228 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.302235 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.302242 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.302248 | orchestrator |
2026-04-09 00:51:15.302255 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-09 00:51:15.302280 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:00.726) 0:01:47.480 ********
2026-04-09 00:51:15.302287 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.302295 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.302302 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.302309 | orchestrator |
2026-04-09 00:51:15.302317 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-09 00:51:15.302325 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:00.463) 0:01:47.943 ********
2026-04-09 00:51:15.302332 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302341 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302349 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302357 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302367 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302378 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302391 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302400 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302416 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302424 | orchestrator |
2026-04-09 00:51:15.302431 |
orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-09 00:51:15.302439 | orchestrator | Thursday 09 April 2026 00:50:43 +0000 (0:00:01.484) 0:01:49.428 ********
2026-04-09 00:51:15.302446 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302462 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302469 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302494 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302519 | orchestrator |
2026-04-09 00:51:15.302526 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-09 00:51:15.302532 | orchestrator | Thursday 09 April 2026 00:50:47 +0000 (0:00:04.150) 0:01:53.578 ********
2026-04-09 00:51:15.302546 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302555 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302562 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302578 | orchestrator | ok:
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302608 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:51:15.302624 | orchestrator |
2026-04-09 00:51:15.302632 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 00:51:15.302639 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:02.939) 0:01:56.518 ********
2026-04-09 00:51:15.302645 | orchestrator |
2026-04-09 00:51:15.302652 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 00:51:15.302659 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:00.070) 0:01:56.589 ********
2026-04-09 00:51:15.302665 | orchestrator |
2026-04-09 00:51:15.302672 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-09 00:51:15.302678 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:00.198) 0:01:56.787 ********
2026-04-09 00:51:15.302685 | orchestrator |
2026-04-09 00:51:15.302693 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-09 00:51:15.302700 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:00.060) 0:01:56.847 ********
2026-04-09 00:51:15.302707 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:15.302714 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:15.302721 | orchestrator |
2026-04-09 00:51:15.302732 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-09 00:51:15.302740 | orchestrator | Thursday 09 April 2026 00:50:57 +0000 (0:00:06.127) 0:02:02.975 ********
2026-04-09 00:51:15.302747 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:15.302754 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:15.302761 | orchestrator |
2026-04-09 00:51:15.302767 | orchestrator | RUNNING
HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-09 00:51:15.302774 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:06.108) 0:02:09.083 ********
2026-04-09 00:51:15.302781 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:51:15.302788 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:51:15.302795 | orchestrator |
2026-04-09 00:51:15.302802 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-09 00:51:15.302808 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:06.427) 0:02:15.511 ********
2026-04-09 00:51:15.302816 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:15.302823 | orchestrator |
2026-04-09 00:51:15.302830 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-09 00:51:15.302836 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.114) 0:02:15.626 ********
2026-04-09 00:51:15.302843 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.302851 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.302858 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.302865 | orchestrator |
2026-04-09 00:51:15.302872 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-09 00:51:15.302879 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.809) 0:02:16.435 ********
2026-04-09 00:51:15.302891 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:15.302898 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:15.302904 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.302911 | orchestrator |
2026-04-09 00:51:15.302918 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-09 00:51:15.302926 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.616) 0:02:17.052 ********
2026-04-09 00:51:15.302932 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.302940 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.302947 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.302954 | orchestrator |
2026-04-09 00:51:15.302960 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-09 00:51:15.302967 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.796) 0:02:17.849 ********
2026-04-09 00:51:15.302974 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:15.302982 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:15.302988 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:51:15.302995 | orchestrator |
2026-04-09 00:51:15.303002 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-09 00:51:15.303008 | orchestrator | Thursday 09 April 2026 00:51:12 +0000 (0:00:00.628) 0:02:18.477 ********
2026-04-09 00:51:15.303015 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.303022 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.303029 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.303036 | orchestrator |
2026-04-09 00:51:15.303043 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-09 00:51:15.303050 | orchestrator | Thursday 09 April 2026 00:51:13 +0000 (0:00:00.753) 0:02:19.231 ********
2026-04-09 00:51:15.303056 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:15.303063 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:15.303070 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:15.303076 | orchestrator |
2026-04-09 00:51:15.303083 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:51:15.303091 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-09 00:51:15.303099 | orchestrator | testbed-node-1 : ok=43  changed=19
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-09 00:51:15.303106 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-09 00:51:15.303117 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:51:15.303124 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:51:15.303131 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:51:15.303138 | orchestrator |
2026-04-09 00:51:15.303144 | orchestrator |
2026-04-09 00:51:15.303151 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:51:15.303159 | orchestrator | Thursday 09 April 2026 00:51:14 +0000 (0:00:01.039) 0:02:20.270 ********
2026-04-09 00:51:15.303165 | orchestrator | ===============================================================================
2026-04-09 00:51:15.303173 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 24.76s
2026-04-09 00:51:15.303180 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.71s
2026-04-09 00:51:15.303187 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.75s
2026-04-09 00:51:15.303194 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.77s
2026-04-09 00:51:15.303201 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.24s
2026-04-09 00:51:15.303213 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.16s
2026-04-09 00:51:15.303220 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.15s
2026-04-09 00:51:15.303232 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.94s
2026-04-09 00:51:15.303239 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.60s
2026-04-09 00:51:15.303246 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.52s
2026-04-09 00:51:15.303253 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.34s
2026-04-09 00:51:15.303260 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.85s
2026-04-09 00:51:15.303311 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.84s
2026-04-09 00:51:15.303318 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.70s
2026-04-09 00:51:15.303326 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.59s
2026-04-09 00:51:15.303333 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.59s
2026-04-09 00:51:15.303340 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.53s
2026-04-09 00:51:15.303347 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s
2026-04-09 00:51:15.303354 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.43s
2026-04-09 00:51:15.303361 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.34s
2026-04-09 00:51:15.303368 | orchestrator | 2026-04-09 00:51:15 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:18.334685 | orchestrator | 2026-04-09 00:51:18 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:18.334930 | orchestrator | 2026-04-09 00:51:18 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:18.335027 | orchestrator | 2026-04-09 00:51:18 | INFO  |
Wait 1 second(s) until the next check
2026-04-09 00:51:21.385671 | orchestrator | 2026-04-09 00:51:21 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:21.386055 | orchestrator | 2026-04-09 00:51:21 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:21.386088 | orchestrator | 2026-04-09 00:51:21 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:24.427237 | orchestrator | 2026-04-09 00:51:24 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:24.429554 | orchestrator | 2026-04-09 00:51:24 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:24.430041 | orchestrator | 2026-04-09 00:51:24 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:27.483763 | orchestrator | 2026-04-09 00:51:27 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:27.484763 | orchestrator | 2026-04-09 00:51:27 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:27.485878 | orchestrator | 2026-04-09 00:51:27 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:30.523234 | orchestrator | 2026-04-09 00:51:30 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:30.523357 | orchestrator | 2026-04-09 00:51:30 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:30.523364 | orchestrator | 2026-04-09 00:51:30 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:33.571470 | orchestrator | 2026-04-09 00:51:33 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:33.572435 | orchestrator | 2026-04-09 00:51:33 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:33.572522 | orchestrator | 2026-04-09 00:51:33 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:36.620845 | orchestrator | 2026-04-09 00:51:36 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:36.622930 | orchestrator | 2026-04-09 00:51:36 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:36.623038 | orchestrator | 2026-04-09 00:51:36 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:39.663826 | orchestrator | 2026-04-09 00:51:39 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:39.666221 | orchestrator | 2026-04-09 00:51:39 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:39.666378 | orchestrator | 2026-04-09 00:51:39 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:42.698544 | orchestrator | 2026-04-09 00:51:42 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:42.698612 | orchestrator | 2026-04-09 00:51:42 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:42.698618 | orchestrator | 2026-04-09 00:51:42 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:45.741103 | orchestrator | 2026-04-09 00:51:45 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:45.742167 | orchestrator | 2026-04-09 00:51:45 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:45.742511 | orchestrator | 2026-04-09 00:51:45 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:48.784514 | orchestrator | 2026-04-09 00:51:48 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:48.785014 | orchestrator | 2026-04-09 00:51:48 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:48.785050 | orchestrator | 2026-04-09 00:51:48 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:51.824496 | orchestrator | 2026-04-09 00:51:51 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in
state STARTED
2026-04-09 00:51:51.828672 | orchestrator | 2026-04-09 00:51:51 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:51.828759 | orchestrator | 2026-04-09 00:51:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:54.865475 | orchestrator | 2026-04-09 00:51:54 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:54.865572 | orchestrator | 2026-04-09 00:51:54 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:54.865981 | orchestrator | 2026-04-09 00:51:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:57.907650 | orchestrator | 2026-04-09 00:51:57 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:51:57.908928 | orchestrator | 2026-04-09 00:51:57 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:51:57.908987 | orchestrator | 2026-04-09 00:51:57 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:00.956972 | orchestrator | 2026-04-09 00:52:00 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:00.957550 | orchestrator | 2026-04-09 00:52:00 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:00.958845 | orchestrator | 2026-04-09 00:52:00 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:03.993643 | orchestrator | 2026-04-09 00:52:03 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:03.993758 | orchestrator | 2026-04-09 00:52:03 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:03.993770 | orchestrator | 2026-04-09 00:52:03 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:07.028224 | orchestrator | 2026-04-09 00:52:07 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:07.028385 | orchestrator | 2026-04-09 00:52:07 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:07.028398 | orchestrator | 2026-04-09 00:52:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:10.068105 | orchestrator | 2026-04-09 00:52:10 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:10.069980 | orchestrator | 2026-04-09 00:52:10 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:10.070325 | orchestrator | 2026-04-09 00:52:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:13.105653 | orchestrator | 2026-04-09 00:52:13 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:13.106247 | orchestrator | 2026-04-09 00:52:13 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:13.106290 | orchestrator | 2026-04-09 00:52:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:16.143376 | orchestrator | 2026-04-09 00:52:16 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:16.144453 | orchestrator | 2026-04-09 00:52:16 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:16.144517 | orchestrator | 2026-04-09 00:52:16 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:19.186966 | orchestrator | 2026-04-09 00:52:19 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:19.188056 | orchestrator | 2026-04-09 00:52:19 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:19.188102 | orchestrator | 2026-04-09 00:52:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:22.225437 | orchestrator | 2026-04-09 00:52:22 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:22.225946 | orchestrator | 2026-04-09 00:52:22 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:22.225971 | orchestrator | 2026-04-09 00:52:22 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:25.272321 | orchestrator | 2026-04-09 00:52:25 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:25.275702 | orchestrator | 2026-04-09 00:52:25 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:25.275805 | orchestrator | 2026-04-09 00:52:25 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:28.320538 | orchestrator | 2026-04-09 00:52:28 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:28.321819 | orchestrator | 2026-04-09 00:52:28 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:28.322010 | orchestrator | 2026-04-09 00:52:28 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:31.367284 | orchestrator | 2026-04-09 00:52:31 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:31.368622 | orchestrator | 2026-04-09 00:52:31 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:31.368941 | orchestrator | 2026-04-09 00:52:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:34.409696 | orchestrator | 2026-04-09 00:52:34 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:34.410870 | orchestrator | 2026-04-09 00:52:34 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:34.410907 | orchestrator | 2026-04-09 00:52:34 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:37.462058 | orchestrator | 2026-04-09 00:52:37 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:52:37.466730 | orchestrator | 2026-04-09 00:52:37 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:52:37.466973 | orchestrator | 2026-04-09 00:52:37 | INFO  | Wait
1 second(s) until the next check 2026-04-09 00:52:40.506220 | orchestrator | 2026-04-09 00:52:40 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:40.506653 | orchestrator | 2026-04-09 00:52:40 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:40.507485 | orchestrator | 2026-04-09 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:43.544634 | orchestrator | 2026-04-09 00:52:43 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:43.545549 | orchestrator | 2026-04-09 00:52:43 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:43.545592 | orchestrator | 2026-04-09 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:46.594003 | orchestrator | 2026-04-09 00:52:46 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:46.596039 | orchestrator | 2026-04-09 00:52:46 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:46.596171 | orchestrator | 2026-04-09 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:49.639040 | orchestrator | 2026-04-09 00:52:49 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:49.639501 | orchestrator | 2026-04-09 00:52:49 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:49.639541 | orchestrator | 2026-04-09 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:52.684221 | orchestrator | 2026-04-09 00:52:52 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:52.685980 | orchestrator | 2026-04-09 00:52:52 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:52.686113 | orchestrator | 2026-04-09 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:55.730239 | orchestrator | 
2026-04-09 00:52:55 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:55.730329 | orchestrator | 2026-04-09 00:52:55 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:55.730338 | orchestrator | 2026-04-09 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:58.776132 | orchestrator | 2026-04-09 00:52:58 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:52:58.777934 | orchestrator | 2026-04-09 00:52:58 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:52:58.777996 | orchestrator | 2026-04-09 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:01.810774 | orchestrator | 2026-04-09 00:53:01 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:01.812179 | orchestrator | 2026-04-09 00:53:01 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:01.812245 | orchestrator | 2026-04-09 00:53:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:04.848902 | orchestrator | 2026-04-09 00:53:04 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:04.850174 | orchestrator | 2026-04-09 00:53:04 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:04.850290 | orchestrator | 2026-04-09 00:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:07.883984 | orchestrator | 2026-04-09 00:53:07 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:07.888481 | orchestrator | 2026-04-09 00:53:07 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:07.888558 | orchestrator | 2026-04-09 00:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:10.930198 | orchestrator | 2026-04-09 00:53:10 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in 
state STARTED 2026-04-09 00:53:10.930965 | orchestrator | 2026-04-09 00:53:10 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:10.931003 | orchestrator | 2026-04-09 00:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:13.962315 | orchestrator | 2026-04-09 00:53:13 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:13.964417 | orchestrator | 2026-04-09 00:53:13 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:13.964499 | orchestrator | 2026-04-09 00:53:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:17.006592 | orchestrator | 2026-04-09 00:53:17 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:17.008998 | orchestrator | 2026-04-09 00:53:17 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:17.009114 | orchestrator | 2026-04-09 00:53:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:20.058249 | orchestrator | 2026-04-09 00:53:20 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:20.058357 | orchestrator | 2026-04-09 00:53:20 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:20.058374 | orchestrator | 2026-04-09 00:53:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:23.097521 | orchestrator | 2026-04-09 00:53:23 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:23.099468 | orchestrator | 2026-04-09 00:53:23 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:23.099539 | orchestrator | 2026-04-09 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:26.136152 | orchestrator | 2026-04-09 00:53:26 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:26.138232 | orchestrator | 2026-04-09 00:53:26 | 
INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:26.138296 | orchestrator | 2026-04-09 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:29.188539 | orchestrator | 2026-04-09 00:53:29 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:29.191687 | orchestrator | 2026-04-09 00:53:29 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:29.191763 | orchestrator | 2026-04-09 00:53:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:32.232308 | orchestrator | 2026-04-09 00:53:32 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:32.234007 | orchestrator | 2026-04-09 00:53:32 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:32.234114 | orchestrator | 2026-04-09 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:35.280403 | orchestrator | 2026-04-09 00:53:35 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:35.284170 | orchestrator | 2026-04-09 00:53:35 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:35.284269 | orchestrator | 2026-04-09 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:38.336548 | orchestrator | 2026-04-09 00:53:38 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:38.337859 | orchestrator | 2026-04-09 00:53:38 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 2026-04-09 00:53:38.337996 | orchestrator | 2026-04-09 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:41.384884 | orchestrator | 2026-04-09 00:53:41 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:41.386561 | orchestrator | 2026-04-09 00:53:41 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED 
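The wait loop condensed above, where the deploy job repeatedly checks two task IDs until neither is still STARTED, can be sketched as follows. This is a minimal illustration under stated assumptions, not the osism client's real API: `wait_for_tasks` and `get_state` are hypothetical names, and the actual job queries a Celery result backend rather than a plain callable.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until none is STARTED, mirroring the log's loop.

    get_state: callable(task_id) -> state string (hypothetical stand-in
    for the real task-state lookup used by the osism client).
    """
    states = {tid: "STARTED" for tid in task_ids}
    deadline = time.monotonic() + timeout
    while "STARTED" in states.values():
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        for tid, state in states.items():
            if state == "STARTED":
                states[tid] = get_state(tid)
                print(f"Task {tid} is in state {states[tid]}")
        # Corresponds to "Wait 1 second(s) until the next check" in the log
        # (wall-clock spacing in the log is ~3 s because each check itself
        # takes time).
        time.sleep(interval)
    return states
```

A task that finishes flips from STARTED to SUCCESS (or FAILURE) and is no longer polled, which is exactly the transition visible at 00:53:47 in the log.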
2026-04-09 00:53:41.386642 | orchestrator | 2026-04-09 00:53:41 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:44.432695 | orchestrator | 2026-04-09 00:53:44 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:53:44.433350 | orchestrator | 2026-04-09 00:53:44 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state STARTED
2026-04-09 00:53:44.433387 | orchestrator | 2026-04-09 00:53:44 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:47.478890 | orchestrator | 2026-04-09 00:53:47 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED
2026-04-09 00:53:47.488871 | orchestrator | 2026-04-09 00:53:47 | INFO  | Task 923b3519-0680-4401-9465-ee0cafc64481 is in state SUCCESS
2026-04-09 00:53:47.489693 | orchestrator |
2026-04-09 00:53:47.491327 | orchestrator |
2026-04-09 00:53:47.491364 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:53:47.491374 | orchestrator |
2026-04-09 00:53:47.491380 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:53:47.491386 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.521) 0:00:00.521 ********
2026-04-09 00:53:47.491391 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.491396 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.491401 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.491406 | orchestrator |
2026-04-09 00:53:47.491410 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:53:47.491415 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.452) 0:00:00.974 ********
2026-04-09 00:53:47.491420 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-09 00:53:47.491425 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-09 00:53:47.491430 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-09 00:53:47.491434 | orchestrator |
2026-04-09 00:53:47.491439 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-09 00:53:47.491443 | orchestrator |
2026-04-09 00:53:47.491448 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-09 00:53:47.491452 | orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:00.731) 0:00:01.705 ********
2026-04-09 00:53:47.491458 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.491462 | orchestrator |
2026-04-09 00:53:47.491467 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-09 00:53:47.491491 | orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:00.833) 0:00:02.539 ********
2026-04-09 00:53:47.491495 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.491502 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.491509 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.491515 | orchestrator |
2026-04-09 00:53:47.491521 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-09 00:53:47.491539 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:02.281) 0:00:04.821 ********
2026-04-09 00:53:47.491546 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.491552 | orchestrator |
2026-04-09 00:53:47.491558 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-09 00:53:47.491564 | orchestrator | Thursday 09 April 2026 00:47:57 +0000 (0:00:00.933) 0:00:05.754 ********
2026-04-09 00:53:47.491570 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.491576 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.491583 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.491587 | orchestrator |
2026-04-09 00:53:47.491591 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-09 00:53:47.491595 | orchestrator | Thursday 09 April 2026 00:47:58 +0000 (0:00:01.071) 0:00:06.827 ********
2026-04-09 00:53:47.491599 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:53:47.491603 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:53:47.491607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:53:47.491610 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:53:47.491614 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:53:47.491618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:53:47.491622 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 00:53:47.491627 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 00:53:47.491631 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 00:53:47.491635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 00:53:47.491639 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 00:53:47.491642 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 00:53:47.491702 | orchestrator |
2026-04-09 00:53:47.491706 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 00:53:47.491710 | orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:02.804) 0:00:09.631 ********
2026-04-09 00:53:47.491714 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-09 00:53:47.491718 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-09 00:53:47.491729 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-09 00:53:47.491733 | orchestrator |
2026-04-09 00:53:47.491736 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 00:53:47.491740 | orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:00.866) 0:00:10.498 ********
2026-04-09 00:53:47.491744 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-09 00:53:47.491748 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-09 00:53:47.491751 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-09 00:53:47.491755 | orchestrator |
2026-04-09 00:53:47.491759 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 00:53:47.491784 | orchestrator | Thursday 09 April 2026 00:48:04 +0000 (0:00:02.172) 0:00:12.670 ********
2026-04-09 00:53:47.491796 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-09 00:53:47.491800 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.491813 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-09 00:53:47.491974 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.491980 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-09 00:53:47.491984 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.491988 | orchestrator |
2026-04-09 00:53:47.491992 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-09 00:53:47.491995 | orchestrator | Thursday 09 April 2026 00:48:04 +0000 (0:00:00.797)
0:00:13.467 ******** 2026-04-09 00:53:47.492002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:53:47.492015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:53:47.492047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-04-09 00:53:47.492055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:53:47.492061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:53:47.492071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.492081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492098 | orchestrator |
2026-04-09 00:53:47.492102 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-09 00:53:47.492106 | orchestrator | Thursday 09 April 2026 00:48:07 +0000 (0:00:02.151) 0:00:15.618 ********
2026-04-09 00:53:47.492110 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.492113 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.492117 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.492121 | orchestrator |
2026-04-09 00:53:47.492125 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-09 00:53:47.492129 | orchestrator | Thursday 09 April 2026 00:48:08 +0000 (0:00:00.961) 0:00:16.580 ********
2026-04-09 00:53:47.492132 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-09 00:53:47.492136 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-09 00:53:47.492168 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-09 00:53:47.492173 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-09 00:53:47.492177 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-09 00:53:47.492181 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-09 00:53:47.492185 | orchestrator |
2026-04-09 00:53:47.492188 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-09 00:53:47.492221 | orchestrator | Thursday 09 April 2026 00:48:10 +0000 (0:00:02.183) 0:00:18.764 ********
2026-04-09 00:53:47.492225 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.492228 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.492232 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.492236 | orchestrator |
2026-04-09 00:53:47.492240 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-09 00:53:47.492243 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:00.983) 0:00:19.747 ********
2026-04-09 00:53:47.492251 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.492255 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.492259 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.492263 | orchestrator |
2026-04-09
00:53:47.492267 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-09 00:53:47.492270 | orchestrator | Thursday 09 April 2026 00:48:12 +0000 (0:00:01.284) 0:00:21.032 ******** 2026-04-09 00:53:47.492274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.492284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.492289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 00:53:47.492299 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.492306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.492310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.492318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 00:53:47.492327 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.492335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.492340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.492346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 00:53:47.492360 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.492607 | orchestrator |
2026-04-09 00:53:47.492626 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-09 00:53:47.492632 | orchestrator | Thursday 09 April 2026 00:48:13 +0000 (0:00:00.944) 0:00:21.976 ********
2026-04-09 00:53:47.492640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.492647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.492675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.492682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.492688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 00:53:47.492769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.492777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.492784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 00:53:47.492936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.492952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.493000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd', '__omit_place_holder__b03911d93696e6c68984e99011b000133b7930cd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-09 00:53:47.493016 | orchestrator |
2026-04-09 00:53:47.493073 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-09 00:53:47.493078 | orchestrator | Thursday 09 April 2026 00:48:16 +0000 (0:00:03.281) 0:00:25.258 ********
2026-04-09 00:53:47.493082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.493087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.493091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.493132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.493138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.493146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.493155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.493160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.493164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.493168 | orchestrator |
2026-04-09 00:53:47.493171 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-09 00:53:47.493175 | orchestrator | Thursday 09 April 2026 00:48:19 +0000 (0:00:02.897) 0:00:28.156 ********
2026-04-09 00:53:47.493179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-09 00:53:47.493184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-09 00:53:47.493187 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-09 00:53:47.493191 | orchestrator |
2026-04-09 00:53:47.493195 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-09 00:53:47.493199 | orchestrator | Thursday 09 April 2026 00:48:21 +0000 (0:00:01.827) 0:00:29.983 ********
2026-04-09 00:53:47.493202 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-09 00:53:47.493206 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-09 00:53:47.493210 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-09 00:53:47.493214 | orchestrator |
2026-04-09 00:53:47.493266 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-09 00:53:47.493272 | orchestrator | Thursday 09 April 2026 00:48:25 +0000 (0:00:03.709) 0:00:33.693 ********
2026-04-09 00:53:47.493960 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.493971 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.493976 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.493980 | orchestrator |
2026-04-09 00:53:47.493984 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-09 00:53:47.493988 | orchestrator | Thursday 09 April 2026 00:48:26 +0000 (0:00:01.215) 0:00:34.908 ********
2026-04-09 00:53:47.493993 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-09 00:53:47.493998 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-09 00:53:47.494009 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-09 00:53:47.494179 | orchestrator |
2026-04-09 00:53:47.494186 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-09 00:53:47.494190 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:01.582) 0:00:36.490 ********
2026-04-09 00:53:47.494194 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-09 00:53:47.494198 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-09 00:53:47.494202 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-09 00:53:47.494206 | orchestrator |
2026-04-09 00:53:47.494209 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-09 00:53:47.494217 | orchestrator | Thursday 09 April 2026 00:48:29 +0000 (0:00:01.573) 0:00:38.064 ********
2026-04-09 00:53:47.494222 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-09 00:53:47.494226 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-09 00:53:47.494230 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-09 00:53:47.494234 | orchestrator |
2026-04-09 00:53:47.494237 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-09 00:53:47.494241 | orchestrator | Thursday 09 April 2026 00:48:30 +0000 (0:00:01.336) 0:00:39.400 ********
2026-04-09 00:53:47.494245 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-09 00:53:47.494249 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-09 00:53:47.494253 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-09 00:53:47.494256 | orchestrator |
2026-04-09 00:53:47.494260 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-09 00:53:47.494264 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:01.417) 0:00:40.817 ********
2026-04-09 00:53:47.494268 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.494271 | orchestrator |
2026-04-09 00:53:47.494275 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-04-09 00:53:47.494279 | orchestrator | Thursday 09 April 2026 00:48:33 +0000 (0:00:00.829) 0:00:41.647 ********
2026-04-09 00:53:47.494284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.494289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.494309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.494320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.494326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.494331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.494335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.495321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.495361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.495376 | orchestrator |
2026-04-09 00:53:47.495380 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-04-09 00:53:47.495385 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:03.273) 0:00:44.920 ********
2026-04-09 00:53:47.495433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.495440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.495448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.495452 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.495456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.495984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.495995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.495999 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.496011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.496410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.496438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.496443 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.496447 | orchestrator |
2026-04-09 00:53:47.496451 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-04-09 00:53:47.496456 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:01.124) 0:00:46.045 ********
2026-04-09 00:53:47.496467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.496471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.496475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.496487 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.496492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.496680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.496691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.496696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled':
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.496704 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.496708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.496712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.496716 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.496720 | orchestrator | 2026-04-09 00:53:47.496724 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 00:53:47.496728 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:01.155) 0:00:47.201 
******** 2026-04-09 00:53:47.496738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.496776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.496782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.496786 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.496790 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.496797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.496802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.496837 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.496840 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.496849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.496901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.496908 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.496912 | orchestrator | 2026-04-09 00:53:47.496916 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS certificate] *** 2026-04-09 00:53:47.496920 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.746) 0:00:47.948 ******** 2026-04-09 00:53:47.496924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.496931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.496936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.496940 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.496944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.496952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.496956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-09 00:53:47.496960 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.496991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.496998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.497042 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.497046 | orchestrator | 2026-04-09 00:53:47.497050 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 00:53:47.497054 | orchestrator | Thursday 09 April 2026 00:48:40 +0000 (0:00:01.105) 0:00:49.053 ******** 2026-04-09 00:53:47.497058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.497067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.497075 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.497106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.497112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.497120 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.497124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.497135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.497165 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.497169 | orchestrator | 2026-04-09 00:53:47.497173 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-09 00:53:47.497176 | orchestrator | Thursday 09 April 2026 00:48:43 +0000 (0:00:02.611) 0:00:51.665 ******** 2026-04-09 00:53:47.497180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.497212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.497221 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.497228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.497710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:53:47.497723 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.497727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:53:47.497784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:53:47.497791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.497794 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.497798 | orchestrator |
2026-04-09 00:53:47.497802 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-04-09 00:53:47.497807 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:01.395) 0:00:53.061 ********
2026-04-09 00:53:47.497814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.497828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.497833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.497837 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.497841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.497844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.497880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.497885 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.497889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.497898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.497902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.497908 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.497914 | orchestrator |
2026-04-09 00:53:47.497920 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-04-09 00:53:47.497925 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:00.724) 0:00:53.786 ********
2026-04-09 00:53:47.497931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.497937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.498010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.498059 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.498096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.498106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.498113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.498117 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.498121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.498125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.498129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.498133 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.498139 | orchestrator |
2026-04-09 00:53:47.498144 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-09 00:53:47.498151 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:01.315) 0:00:55.102 ********
2026-04-09 00:53:47.498156 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-09 00:53:47.498161 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-09 00:53:47.498192 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-09 00:53:47.498198 | orchestrator |
2026-04-09 00:53:47.498205 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-09 00:53:47.498209 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:01.760) 0:00:56.862 ********
2026-04-09 00:53:47.498213 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-09 00:53:47.498216 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-09 00:53:47.498220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-09 00:53:47.498224 | orchestrator |
2026-04-09 00:53:47.498228 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-09 00:53:47.498232 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:01.477) 0:00:58.340 ********
2026-04-09 00:53:47.498235 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 00:53:47.498239 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 00:53:47.498243 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 00:53:47.498247 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.498251 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 00:53:47.498255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 00:53:47.498258 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.498262 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 00:53:47.498266 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.498270 | orchestrator |
2026-04-09 00:53:47.498275 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-04-09 00:53:47.498711 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:01.468) 0:00:59.809 ********
2026-04-09 00:53:47.498732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.498737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.498741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:53:47.498806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.498812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.498879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:53:47.498887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.498892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.498896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:53:47.498900 | orchestrator |
2026-04-09 00:53:47.498904 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-09 00:53:47.498908 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:03.178) 0:01:02.987 ********
2026-04-09 00:53:47.498912 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.498916 | orchestrator |
2026-04-09 00:53:47.498920 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-09 00:53:47.498927 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:01.094) 0:01:04.082 ********
2026-04-09 00:53:47.498932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 00:53:47.498969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:53:47.498975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.498981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.498985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 00:53:47.499499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:53:47.499532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 00:53:47.499559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:53:47.499563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499573 | orchestrator |
2026-04-09 00:53:47.499577 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-09 00:53:47.499582 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:04.292) 0:01:08.374 ********
2026-04-09 00:53:47.499586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 00:53:47.499595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:53:47.499599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499607 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.499612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 00:53:47.499616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:53:47.499622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499630 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.499638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-09 00:53:47.499642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:53:47.499648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.499658 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.499662 | orchestrator |
2026-04-09 00:53:47.499666 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-04-09 00:53:47.499670 | orchestrator | Thursday 09 April 2026 00:49:00 +0000 (0:00:00.858) 0:01:09.232 ********
2026-04-09 00:53:47.499674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-09 00:53:47.499679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-09 00:53:47.499683 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.499687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-09 00:53:47.499691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-09 00:53:47.499695 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.499699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-09 00:53:47.499702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-09 00:53:47.499706 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.499710 | orchestrator |
2026-04-09 00:53:47.499717 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-04-09 00:53:47.499721 | orchestrator | Thursday 09 April 2026 00:49:01 +0000 (0:00:00.834) 0:01:10.067 ********
2026-04-09 00:53:47.499725 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.499729 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.499732 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.499736 | orchestrator |
2026-04-09 00:53:47.499740 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-04-09 00:53:47.499744 | orchestrator | Thursday 09 April 2026 00:49:03 +0000 (0:00:01.542) 0:01:11.609 ********
2026-04-09 00:53:47.499747 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.499751 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.499755 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.499759 | orchestrator |
2026-04-09 00:53:47.499762 | orchestrator | TASK [include_role : barbican] *************************************************
2026-04-09 00:53:47.499766 | orchestrator | Thursday 09 April 2026
00:49:05 +0000 (0:00:01.941) 0:01:13.550 ******** 2026-04-09 00:53:47.499770 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.499774 | orchestrator | 2026-04-09 00:53:47.499777 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-09 00:53:47.499781 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:00.721) 0:01:14.272 ******** 2026-04-09 00:53:47.499787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.499793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.499813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.499833 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499840 | orchestrator | 2026-04-09 00:53:47.499844 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-09 00:53:47.499848 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:03.604) 0:01:17.877 ******** 2026-04-09 00:53:47.499855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.499859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499871 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.499875 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.499879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499887 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.499894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.499898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499907 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.499912 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.499915 | orchestrator | 2026-04-09 00:53:47.499919 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-09 00:53:47.499923 | orchestrator | Thursday 09 April 2026 00:49:10 +0000 (0:00:01.439) 0:01:19.316 ******** 2026-04-09 00:53:47.499927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500052 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-04-09 00:53:47.500086 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500097 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500101 | orchestrator | 2026-04-09 00:53:47.500105 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-09 00:53:47.500108 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:00.697) 0:01:20.013 ******** 2026-04-09 00:53:47.500112 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.500116 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.500120 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.500124 | orchestrator | 2026-04-09 00:53:47.500128 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-09 00:53:47.500133 | orchestrator | Thursday 09 April 2026 00:49:12 +0000 (0:00:01.155) 0:01:21.169 ******** 2026-04-09 00:53:47.500137 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.500142 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.500149 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.500154 | orchestrator | 2026-04-09 00:53:47.500161 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-09 00:53:47.500166 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:01.742) 0:01:22.912 ******** 2026-04-09 00:53:47.500170 | orchestrator | 
skipping: [testbed-node-0] 2026-04-09 00:53:47.500175 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500179 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500183 | orchestrator | 2026-04-09 00:53:47.500188 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-09 00:53:47.500192 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.263) 0:01:23.175 ******** 2026-04-09 00:53:47.500196 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.500201 | orchestrator | 2026-04-09 00:53:47.500205 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-09 00:53:47.500210 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.687) 0:01:23.863 ******** 2026-04-09 00:53:47.500229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:53:47.500235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:53:47.500240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:53:47.500244 | orchestrator | 2026-04-09 00:53:47.500249 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-09 00:53:47.500253 | orchestrator | Thursday 09 April 2026 00:49:17 +0000 (0:00:02.373) 0:01:26.236 ******** 2026-04-09 00:53:47.500261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:53:47.500269 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:53:47.500278 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:53:47.500290 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500297 | orchestrator | 2026-04-09 00:53:47.500303 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-09 00:53:47.500308 | orchestrator | Thursday 09 April 2026 00:49:19 +0000 (0:00:01.319) 0:01:27.556 ******** 2026-04-09 00:53:47.500313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:53:47.500320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:53:47.500326 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:53:47.500337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:53:47.500342 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:53:47.500354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:53:47.500359 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500363 | orchestrator | 2026-04-09 00:53:47.500367 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-09 00:53:47.500372 | orchestrator | Thursday 09 April 2026 00:49:20 +0000 (0:00:01.660) 0:01:29.216 ******** 2026-04-09 00:53:47.500376 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500380 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500384 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500389 | orchestrator | 2026-04-09 00:53:47.500393 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-09 00:53:47.500397 | orchestrator | Thursday 09 April 2026 00:49:21 +0000 (0:00:00.394) 0:01:29.611 ******** 2026-04-09 00:53:47.500402 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500406 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500410 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500415 | orchestrator | 2026-04-09 00:53:47.500419 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-09 00:53:47.500423 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:01.059) 0:01:30.670 ******** 2026-04-09 00:53:47.500427 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.500432 | orchestrator | 2026-04-09 00:53:47.500436 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-09 00:53:47.500442 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:00.902) 0:01:31.573 ******** 2026-04-09 00:53:47.500447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.500456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.500469 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.500506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500515 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500522 | orchestrator | 2026-04-09 00:53:47.500526 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-09 00:53:47.500531 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:03.410) 0:01:34.984 ******** 2026-04-09 00:53:47.500535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.500539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2026-04-09 00:53:47.500554 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.500579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500598 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.500613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500632 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500635 | orchestrator | 2026-04-09 00:53:47.500639 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-09 00:53:47.500643 | orchestrator | Thursday 09 April 2026 00:49:27 +0000 (0:00:00.999) 0:01:35.983 ******** 2026-04-09 00:53:47.500647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500655 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500667 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-09 00:53:47.500708 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500712 | orchestrator | 2026-04-09 00:53:47.500716 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-09 00:53:47.500720 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:01.171) 0:01:37.155 ******** 2026-04-09 00:53:47.500724 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.500727 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.500731 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.500735 | orchestrator | 2026-04-09 00:53:47.500739 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-09 00:53:47.500742 | orchestrator | Thursday 09 April 2026 00:49:30 +0000 (0:00:01.477) 0:01:38.632 ******** 2026-04-09 00:53:47.500746 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.500750 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.500758 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.500761 | orchestrator | 2026-04-09 00:53:47.500765 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-09 00:53:47.500769 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:02.295) 0:01:40.928 ******** 2026-04-09 00:53:47.500773 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500776 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500780 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500784 | orchestrator | 2026-04-09 00:53:47.500787 | orchestrator | TASK [include_role : 
cyborg] *************************************************** 2026-04-09 00:53:47.500791 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:00.345) 0:01:41.274 ******** 2026-04-09 00:53:47.500795 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.500799 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.500804 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.500808 | orchestrator | 2026-04-09 00:53:47.500812 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-09 00:53:47.500816 | orchestrator | Thursday 09 April 2026 00:49:33 +0000 (0:00:00.323) 0:01:41.598 ******** 2026-04-09 00:53:47.500819 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.500823 | orchestrator | 2026-04-09 00:53:47.500827 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-09 00:53:47.500830 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:00.986) 0:01:42.584 ******** 2026-04-09 00:53:47.500834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 00:53:47.500839 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:53:47.500843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 00:53:47.500867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.500874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:53:47.501973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}})  2026-04-09 00:53:47.502003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 00:53:47.502062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:53:47.502097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502119 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502127 | orchestrator | 2026-04-09 00:53:47.502131 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-09 00:53:47.502136 | orchestrator | Thursday 09 April 2026 00:49:38 +0000 (0:00:04.367) 0:01:46.952 ******** 2026-04-09 00:53:47.502140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 00:53:47.502182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 00:53:47.502188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  
2026-04-09 00:53:47.502194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:53:47.502202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502206 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502270 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.502275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502278 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.502309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-04-09 00:53:47.502315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:53:47.502321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502329 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.502372 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.502376 | orchestrator | 2026-04-09 00:53:47.502380 | orchestrator | TASK [haproxy-config 
: Configuring firewall for designate] ********************* 2026-04-09 00:53:47.502384 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:00.937) 0:01:47.890 ******** 2026-04-09 00:53:47.502388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-09 00:53:47.502393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-09 00:53:47.502398 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.502402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-09 00:53:47.502406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-09 00:53:47.502410 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.502416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-09 00:53:47.502420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-09 00:53:47.502423 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.502427 | orchestrator | 2026-04-09 00:53:47.502431 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-09 00:53:47.502435 | 
orchestrator | Thursday 09 April 2026 00:49:40 +0000 (0:00:01.423) 0:01:49.314 ******** 2026-04-09 00:53:47.502439 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.502442 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.502446 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.502450 | orchestrator | 2026-04-09 00:53:47.502454 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-09 00:53:47.502457 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:01.325) 0:01:50.639 ******** 2026-04-09 00:53:47.502461 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.502465 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.502471 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.502475 | orchestrator | 2026-04-09 00:53:47.502479 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-09 00:53:47.502483 | orchestrator | Thursday 09 April 2026 00:49:43 +0000 (0:00:01.709) 0:01:52.348 ******** 2026-04-09 00:53:47.502486 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.502490 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.502494 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.502498 | orchestrator | 2026-04-09 00:53:47.502501 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-09 00:53:47.502505 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.253) 0:01:52.602 ******** 2026-04-09 00:53:47.502509 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.502513 | orchestrator | 2026-04-09 00:53:47.502516 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-09 00:53:47.502520 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.853) 0:01:53.455 ******** 2026-04-09 
00:53:47.502571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:53:47.502581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.502611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:53:47.502642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:53:47.502647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.502685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.502691 | orchestrator | 2026-04-09 00:53:47.502697 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-09 00:53:47.502701 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:04.010) 0:01:57.466 ******** 2026-04-09 00:53:47.502709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:53:47.502742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.502750 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.502760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:53:47.502795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:53:47.502804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.502811 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.502841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.502847 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.502851 | orchestrator | 2026-04-09 00:53:47.502855 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-09 00:53:47.502859 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:03.774) 0:02:01.240 ******** 2026-04-09 00:53:47.502863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:53:47.502868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:53:47.502875 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.502881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:53:47.502885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:53:47.502889 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.502893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:53:47.502897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:53:47.502901 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.502904 | orchestrator | 2026-04-09 00:53:47.502908 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-09 00:53:47.502912 | orchestrator | Thursday 09 April 2026 00:49:57 +0000 (0:00:04.877) 0:02:06.118 ******** 2026-04-09 00:53:47.502916 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.502920 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.502923 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.502927 | orchestrator | 2026-04-09 00:53:47.502931 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-09 00:53:47.502935 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 (0:00:01.986) 0:02:08.104 ******** 2026-04-09 00:53:47.502938 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.502942 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.502973 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.502978 | orchestrator | 2026-04-09 00:53:47.502982 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-09 00:53:47.502993 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:02.112) 0:02:10.217 ******** 2026-04-09 00:53:47.502998 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503001 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503005 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503009 | orchestrator | 2026-04-09 00:53:47.503013 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-09 00:53:47.503017 | orchestrator | Thursday 09 April 2026 00:50:02 +0000 (0:00:00.601) 0:02:10.818 ******** 
2026-04-09 00:53:47.503091 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.503095 | orchestrator | 2026-04-09 00:53:47.503099 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-09 00:53:47.503108 | orchestrator | Thursday 09 April 2026 00:50:03 +0000 (0:00:01.306) 0:02:12.124 ******** 2026-04-09 00:53:47.503112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 00:53:47.503121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 00:53:47.503125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 00:53:47.503129 | orchestrator | 2026-04-09 00:53:47.503133 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-09 00:53:47.503136 | orchestrator | Thursday 09 April 2026 00:50:08 +0000 (0:00:04.563) 0:02:16.688 ******** 2026-04-09 00:53:47.503140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 00:53:47.503144 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 00:53:47.503193 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 00:53:47.503201 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503205 | orchestrator | 2026-04-09 00:53:47.503209 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-09 00:53:47.503213 | orchestrator | Thursday 09 April 2026 00:50:08 +0000 (0:00:00.527) 0:02:17.215 ******** 2026-04-09 00:53:47.503217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-09 00:53:47.503221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-09 00:53:47.503227 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:53:47.503231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-09 00:53:47.503235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-09 00:53:47.503239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-09 00:53:47.503243 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-09 00:53:47.503250 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503254 | orchestrator | 2026-04-09 00:53:47.503258 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-09 00:53:47.503262 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:00.797) 0:02:18.013 ******** 2026-04-09 00:53:47.503265 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.503269 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.503273 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.503277 | orchestrator | 2026-04-09 00:53:47.503281 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-09 00:53:47.503284 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:01.355) 0:02:19.368 ******** 2026-04-09 00:53:47.503288 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.503292 | orchestrator | changed: [testbed-node-0] 
2026-04-09 00:53:47.503296 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.503299 | orchestrator | 2026-04-09 00:53:47.503303 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-09 00:53:47.503307 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:02.270) 0:02:21.639 ******** 2026-04-09 00:53:47.503311 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503315 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503318 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503327 | orchestrator | 2026-04-09 00:53:47.503331 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-09 00:53:47.503335 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.395) 0:02:22.034 ******** 2026-04-09 00:53:47.503339 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.503343 | orchestrator | 2026-04-09 00:53:47.503346 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-09 00:53:47.503350 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:01.107) 0:02:23.141 ******** 2026-04-09 00:53:47.503390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:53:47.503397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:53:47.503427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:53:47.503432 | orchestrator | 2026-04-09 00:53:47.503436 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using 
single external frontend] *** 2026-04-09 00:53:47.503440 | orchestrator | Thursday 09 April 2026 00:50:18 +0000 (0:00:03.566) 0:02:26.707 ******** 2026-04-09 00:53:47.503462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:53:47.503470 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:53:47.503481 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:53:47.503550 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503554 | orchestrator | 2026-04-09 00:53:47.503558 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-09 00:53:47.503561 | orchestrator | Thursday 09 April 2026 00:50:18 +0000 (0:00:00.604) 0:02:27.312 ******** 2026-04-09 00:53:47.503566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 00:53:47.503571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:53:47.503579 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 00:53:47.503583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:53:47.503588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:53:47.503592 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 00:53:47.503603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:53:47.503607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 00:53:47.503610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:53:47.503614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 00:53:47.503618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:53:47.503622 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:53:47.503660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-09 00:53:47.503664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:53:47.503667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:53:47.503671 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503675 | orchestrator | 2026-04-09 00:53:47.503679 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-09 00:53:47.503683 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:00.934) 0:02:28.246 ******** 2026-04-09 00:53:47.503687 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.503690 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.503694 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.503698 | orchestrator | 2026-04-09 00:53:47.503702 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-09 00:53:47.503705 | orchestrator | Thursday 09 April 2026 00:50:21 +0000 (0:00:01.575) 0:02:29.822 ******** 2026-04-09 00:53:47.503709 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.503713 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.503723 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.503730 | orchestrator | 2026-04-09 00:53:47.503734 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-09 00:53:47.503738 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:02.008) 0:02:31.831 ******** 2026-04-09 00:53:47.503742 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503746 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503749 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
00:53:47.503753 | orchestrator | 2026-04-09 00:53:47.503770 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-09 00:53:47.503774 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:00.300) 0:02:32.132 ******** 2026-04-09 00:53:47.503778 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503781 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.503785 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.503789 | orchestrator | 2026-04-09 00:53:47.503793 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-09 00:53:47.503797 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:00.281) 0:02:32.413 ******** 2026-04-09 00:53:47.503800 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.503804 | orchestrator | 2026-04-09 00:53:47.503808 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-09 00:53:47.503812 | orchestrator | Thursday 09 April 2026 00:50:25 +0000 (0:00:01.121) 0:02:33.535 ******** 2026-04-09 00:53:47.503816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:53:47.503850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:53:47.503862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:53:47.503869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:53:47.503877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:53:47.503881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:53:47.503885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:53:47.503909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:53:47.503914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:53:47.503922 | orchestrator | 2026-04-09 00:53:47.503926 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-09 00:53:47.503930 | orchestrator | Thursday 09 April 2026 00:50:28 +0000 (0:00:03.523) 0:02:37.059 ******** 2026-04-09 00:53:47.503936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:53:47.503941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:53:47.503945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:53:47.503949 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.503979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-09 00:53:47.503987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:53:47.504003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:53:47.504014 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.504037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:53:47.504044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:53:47.504049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:53:47.504055 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.504060 | orchestrator | 2026-04-09 00:53:47.504109 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-09 00:53:47.504118 | orchestrator | Thursday 09 
April 2026 00:50:29 +0000 (0:00:00.587) 0:02:37.646 ******** 2026-04-09 00:53:47.504124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 00:53:47.504149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 00:53:47.504155 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.504161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 00:53:47.504167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 00:53:47.504173 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.504182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-09 00:53:47.504188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-04-09 00:53:47.504194 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.504200 | orchestrator | 2026-04-09 00:53:47.504206 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-09 00:53:47.504212 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:01.039) 0:02:38.686 ******** 2026-04-09 00:53:47.504217 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.504223 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.504229 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.504234 | orchestrator | 2026-04-09 00:53:47.504240 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-09 00:53:47.504246 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:01.399) 0:02:40.085 ******** 2026-04-09 00:53:47.504252 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.504257 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.504263 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.504269 | orchestrator | 2026-04-09 00:53:47.504275 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-09 00:53:47.504280 | orchestrator | Thursday 09 April 2026 00:50:33 +0000 (0:00:01.954) 0:02:42.039 ******** 2026-04-09 00:53:47.504286 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.504292 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.504297 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.504311 | orchestrator | 2026-04-09 00:53:47.504317 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-09 00:53:47.504323 | orchestrator | Thursday 09 April 2026 00:50:33 +0000 (0:00:00.272) 0:02:42.311 ******** 2026-04-09 00:53:47.504329 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 
00:53:47.504335 | orchestrator | 2026-04-09 00:53:47.504341 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-09 00:53:47.504346 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:01.027) 0:02:43.339 ******** 2026-04-09 00:53:47.504352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 00:53:47.504407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 
00:53:47.504421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 00:53:47.504426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 00:53:47.504434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504444 | orchestrator | 2026-04-09 00:53:47.504448 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-09 00:53:47.504452 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:03.284) 0:02:46.623 ******** 2026-04-09 00:53:47.504486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 00:53:47.504494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504498 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.504502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 00:53:47.504506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 00:53:47.504541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504547 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.504551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504554 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.504558 | orchestrator | 2026-04-09 00:53:47.504562 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-09 00:53:47.504566 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:00.586) 0:02:47.210 ******** 2026-04-09 00:53:47.504571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-09 00:53:47.504578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-09 00:53:47.504582 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.504586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}})  2026-04-09 00:53:47.504589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-09 00:53:47.504593 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.504597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-09 00:53:47.504601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-09 00:53:47.504605 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.504608 | orchestrator | 2026-04-09 00:53:47.504612 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-09 00:53:47.504616 | orchestrator | Thursday 09 April 2026 00:50:39 +0000 (0:00:00.971) 0:02:48.181 ******** 2026-04-09 00:53:47.504623 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.504627 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.504631 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.504635 | orchestrator | 2026-04-09 00:53:47.504638 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-09 00:53:47.504642 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:01.446) 0:02:49.628 ******** 2026-04-09 00:53:47.504646 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.504650 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.504654 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.504657 | orchestrator | 2026-04-09 00:53:47.504661 | orchestrator | TASK 
[include_role : manila] *************************************************** 2026-04-09 00:53:47.504665 | orchestrator | Thursday 09 April 2026 00:50:43 +0000 (0:00:02.164) 0:02:51.792 ******** 2026-04-09 00:53:47.504669 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.504672 | orchestrator | 2026-04-09 00:53:47.504676 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-09 00:53:47.504680 | orchestrator | Thursday 09 April 2026 00:50:44 +0000 (0:00:01.076) 0:02:52.868 ******** 2026-04-09 00:53:47.504720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 00:53:47.504727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 00:53:47.504746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-09 00:53:47.504787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504802 | orchestrator | 2026-04-09 00:53:47.504806 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-09 00:53:47.504810 | orchestrator | Thursday 09 April 2026 00:50:48 +0000 (0:00:03.856) 0:02:56.725 ******** 2026-04-09 00:53:47.504832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 00:53:47.504837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504888 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.504892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 00:53:47.504897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504938 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.504942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-09 00:53:47.504952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.504964 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.504967 | orchestrator | 2026-04-09 00:53:47.504971 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-09 00:53:47.504975 | orchestrator | Thursday 09 April 2026 00:50:48 +0000 (0:00:00.662) 0:02:57.387 ******** 2026-04-09 00:53:47.504979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-09 00:53:47.504983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-09 00:53:47.504986 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-09 00:53:47.505162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-09 00:53:47.505176 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-09 00:53:47.505196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-09 00:53:47.505204 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505208 | orchestrator | 2026-04-09 00:53:47.505213 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-09 00:53:47.505223 | orchestrator | Thursday 09 April 2026 00:50:49 +0000 (0:00:00.900) 0:02:58.287 ******** 2026-04-09 00:53:47.505227 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.505231 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.505235 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.505238 | orchestrator | 2026-04-09 00:53:47.505242 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-09 00:53:47.505246 | orchestrator | Thursday 09 April 2026 00:50:51 +0000 (0:00:01.363) 0:02:59.650 ******** 2026-04-09 00:53:47.505250 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.505254 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.505257 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.505261 | orchestrator | 2026-04-09 00:53:47.505265 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-09 00:53:47.505269 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:02.018) 0:03:01.668 ******** 2026-04-09 00:53:47.505276 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.505280 | orchestrator | 2026-04-09 00:53:47.505284 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-09 00:53:47.505288 | orchestrator | Thursday 09 April 2026 00:50:54 +0000 (0:00:01.145) 0:03:02.814 ******** 2026-04-09 00:53:47.505292 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:53:47.505296 | orchestrator | 2026-04-09 00:53:47.505299 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-09 
00:53:47.505303 | orchestrator | Thursday 09 April 2026 00:50:57 +0000 (0:00:03.274) 0:03:06.088 ******** 2026-04-09 00:53:47.505309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:53:47.505383 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:53:47.505393 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:53:47.505404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:53:47.505408 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:53:47.505442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:53:47.505446 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505450 | orchestrator | 2026-04-09 00:53:47.505454 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
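The `custom_member_list` entries repeated through the mariadb HAProxy items above follow a fixed active/backup pattern: the first Galera node is the writable member, the rest carry `backup` so HAProxy only fails over instead of load-balancing writes. A minimal sketch of how such member lines could be generated (the helper `render_members` is hypothetical, not part of kolla-ansible; node names and addresses are taken from the log):

```python
# Sketch only: reproduce the ' server ...' member lines seen in the
# mariadb 'custom_member_list' above. render_members is a hypothetical
# helper, not the actual kolla-ansible template.
def render_members(nodes, port=3306,
                   check="check port 3306 inter 2000 rise 2 fall 5"):
    """First node is the active Galera writer; all others get 'backup'
    so HAProxy fails over rather than spreading writes across nodes."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        suffix = "" if i == 0 else " backup"
        # Leading space matches the indentation used in the log entries.
        lines.append(f" server {name} {addr}:{port} {check}{suffix}")
    return lines

members = render_members([
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
])
```

The `check port 3306 inter 2000 rise 2 fall 5` options mean: probe every 2000 ms, mark a member up after 2 consecutive successes and down after 5 consecutive failures.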
2026-04-09 00:53:47.505458 | orchestrator | Thursday 09 April 2026 00:50:59 +0000 (0:00:02.064) 0:03:08.153 ******** 2026-04-09 00:53:47.505464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:53:47.505469 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:53:47.505472 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:53:47.505517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:53:47.505521 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:53:47.505560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:53:47.505566 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505570 | orchestrator | 2026-04-09 00:53:47.505574 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-09 
00:53:47.505577 | orchestrator | Thursday 09 April 2026 00:51:01 +0000 (0:00:02.200) 0:03:10.353 ******** 2026-04-09 00:53:47.505581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:53:47.505588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:53:47.505592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 
00:53:47.505596 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:53:47.505604 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:53:47.505641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}})  2026-04-09 00:53:47.505647 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505651 | orchestrator | 2026-04-09 00:53:47.505655 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-09 00:53:47.505659 | orchestrator | Thursday 09 April 2026 00:51:04 +0000 (0:00:02.511) 0:03:12.865 ******** 2026-04-09 00:53:47.505663 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.505666 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.505670 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.505674 | orchestrator | 2026-04-09 00:53:47.505678 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-09 00:53:47.505682 | orchestrator | Thursday 09 April 2026 00:51:06 +0000 (0:00:01.844) 0:03:14.710 ******** 2026-04-09 00:53:47.505685 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505689 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505693 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505697 | orchestrator | 2026-04-09 00:53:47.505700 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-09 00:53:47.505704 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:01.447) 0:03:16.158 ******** 2026-04-09 00:53:47.505720 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505724 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505728 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505732 | orchestrator | 2026-04-09 00:53:47.505735 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-09 00:53:47.505739 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:00.267) 0:03:16.426 ******** 2026-04-09 00:53:47.505743 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 
00:53:47.505747 | orchestrator | 2026-04-09 00:53:47.505751 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-09 00:53:47.505754 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:01.187) 0:03:17.613 ******** 2026-04-09 00:53:47.505763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:53:47.505768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:53:47.505775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:53:47.505779 | orchestrator | 2026-04-09 00:53:47.505783 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-09 00:53:47.505787 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:01.464) 0:03:19.078 ******** 2026-04-09 00:53:47.505820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:53:47.505826 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:53:47.505834 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:53:47.505844 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505848 | orchestrator | 2026-04-09 00:53:47.505852 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-09 00:53:47.505859 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.369) 0:03:19.448 ******** 2026-04-09 00:53:47.505863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 00:53:47.505868 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 00:53:47.505883 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 00:53:47.505891 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505895 | orchestrator | 2026-04-09 00:53:47.505898 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-09 00:53:47.505902 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.810) 0:03:20.258 ******** 2026-04-09 00:53:47.505906 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505910 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505914 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505917 | orchestrator | 2026-04-09 00:53:47.505921 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-09 00:53:47.505925 | orchestrator | Thursday 09 April 2026 00:51:12 +0000 (0:00:00.383) 0:03:20.642 ******** 2026-04-09 00:53:47.505929 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.505933 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.505936 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.505940 | orchestrator | 
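The `healthcheck` dicts attached to each container item above (`interval`, `retries`, `start_period`, `timeout` as second counts, plus a `CMD-SHELL` test) map naturally onto Docker-style health-check options. An illustrative translation, assuming the fields mean what Docker's flags mean (`to_docker_args` is a hypothetical helper, not part of kolla-ansible):

```python
# Illustration only: convert a kolla-style healthcheck dict, as dumped
# in the log above, into docker-run style flags. Hypothetical helper.
def to_docker_args(hc):
    # A ['CMD-SHELL', cmd] test runs cmd through a shell; otherwise the
    # list elements form the exec-style command.
    if hc["test"][0] == "CMD-SHELL":
        cmd = hc["test"][1]
    else:
        cmd = " ".join(hc["test"])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

args = to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen memcached 11211"],
    "timeout": "30",
})
```

With the memcached values from the log this yields a check that runs `healthcheck_listen memcached 11211` every 30 s, allowing 3 retries and a 5 s grace period after start.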
2026-04-09 00:53:47.505944 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-09 00:53:47.505948 | orchestrator | Thursday 09 April 2026 00:51:13 +0000 (0:00:01.172) 0:03:21.814 ********
2026-04-09 00:53:47.505952 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.505955 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.505959 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.505963 | orchestrator |
2026-04-09 00:53:47.505994 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-09 00:53:47.506000 | orchestrator | Thursday 09 April 2026 00:51:13 +0000 (0:00:00.354) 0:03:22.169 ********
2026-04-09 00:53:47.506004 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.506007 | orchestrator |
2026-04-09 00:53:47.506069 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-04-09 00:53:47.506075 | orchestrator | Thursday 09 April 2026 00:51:14 +0000 (0:00:01.261) 0:03:23.431 ********
2026-04-09 00:53:47.506079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 00:53:47.506091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 00:53:47.506143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.506172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 00:53:47.506227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 00:53:47.506278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 00:53:47.506282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 00:53:47.506287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.506310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 00:53:47.506385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.506390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.506483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 00:53:47.506516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.506532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 00:53:47.506597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.506608 | orchestrator |
2026-04-09 00:53:47.506612 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-09 00:53:47.506616 | orchestrator | Thursday 09 April 2026 00:51:18 +0000 (0:00:03.914) 0:03:27.345 ********
2026-04-09 00:53:47.506620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 00:53:47.506627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-09 00:53:47.506668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.506672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-09 00:53:47.506699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 00:53:47.506708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:53:47.506736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-09 00:53:47.506772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 00:53:47.506811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-09 00:53:47.506817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.506832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 00:53:47.506840 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:53:47.506844 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.506881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 00:53:47.506887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506894 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:53:47.506898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 00:53:47.506902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-09 00:53:47.506953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 00:53:47.506974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.506992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-09 00:53:47.507074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:53:47.507083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.507094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:53:47.507100 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.507108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 00:53:47.507114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-09 00:53:47.507126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.507236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:53:47.507246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.507271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-09 00:53:47.507276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})
2026-04-09 00:53:47.507280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-09 00:53:47.507308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-09 00:53:47.507313 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.507317 | orchestrator |
2026-04-09 00:53:47.507328 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-04-09 00:53:47.507332 | orchestrator | Thursday 09 April 2026 00:51:20 +0000 (0:00:01.630) 0:03:28.976 ********
2026-04-09 00:53:47.507336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-09 00:53:47.507341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-09 00:53:47.507346 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.507350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-09 00:53:47.507353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-09 00:53:47.507357 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.507364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-04-09 00:53:47.507368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-04-09 00:53:47.507371 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.507375 | orchestrator |
2026-04-09 00:53:47.507379 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-04-09 00:53:47.507383 | orchestrator | Thursday 09 April 2026 00:51:21 +0000 (0:00:01.330) 0:03:30.306 ********
2026-04-09 00:53:47.507390 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.507394 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.507397 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.507401 | orchestrator |
2026-04-09 00:53:47.507405 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-04-09 00:53:47.507414 | orchestrator | Thursday 09 April 2026 00:51:23 +0000 (0:00:01.277) 0:03:31.583 ********
2026-04-09 00:53:47.507418 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.507421 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.507425 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.507429 | orchestrator |
2026-04-09 00:53:47.507433 | orchestrator | TASK [include_role : placement] ************************************************
2026-04-09 00:53:47.507436 | orchestrator | Thursday 09 April 2026 00:51:24 +0000 (0:00:01.910) 0:03:33.494 ********
2026-04-09 00:53:47.507440 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.507444 | orchestrator |
2026-04-09 00:53:47.507448 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-04-09 00:53:47.507451 |
orchestrator | Thursday 09 April 2026 00:51:26 +0000 (0:00:01.236) 0:03:34.730 ********
2026-04-09 00:53:47.507456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507482 | orchestrator |
2026-04-09 00:53:47.507486 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-04-09 00:53:47.507497 | orchestrator | Thursday 09 April 2026 00:51:29 +0000 (0:00:03.084) 0:03:37.815 ********
2026-04-09 00:53:47.507501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507505 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.507509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507513 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.507528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507533 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.507536 | orchestrator |
2026-04-09 00:53:47.507540 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-04-09 00:53:47.507544 | orchestrator | Thursday 09 April 2026 00:51:29 +0000 (0:00:00.499) 0:03:38.314 ********
2026-04-09 00:53:47.507548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 00:53:47.507552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 00:53:47.507557 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.507560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 00:53:47.507567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 00:53:47.507571 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.507577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 00:53:47.507581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-09 00:53:47.507585 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.507589 | orchestrator |
2026-04-09 00:53:47.507592 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-09 00:53:47.507596 | orchestrator | Thursday 09 April 2026 00:51:31 +0000 (0:00:01.327) 0:03:39.642 ********
2026-04-09 00:53:47.507600 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.507604 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.507607 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.507611 | orchestrator |
2026-04-09 00:53:47.507615 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-09 00:53:47.507618 | orchestrator | Thursday 09 April 2026 00:51:32 +0000 (0:00:01.392) 0:03:41.035 ********
2026-04-09 00:53:47.507622 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:53:47.507626 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:53:47.507630 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:53:47.507633 | orchestrator |
2026-04-09 00:53:47.507637 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-09 00:53:47.507641 | orchestrator | Thursday 09 April 2026 00:51:34 +0000 (0:00:02.129) 0:03:43.164 ********
2026-04-09 00:53:47.507645 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.507649 | orchestrator |
2026-04-09 00:53:47.507652 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-09 00:53:47.507664 | orchestrator | Thursday 09 April 2026 00:51:36 +0000 (0:00:01.465) 0:03:44.629 ********
2026-04-09 00:53:47.507670 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507754 | orchestrator |
2026-04-09 00:53:47.507759 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-09 00:53:47.507763 | orchestrator | Thursday 09 April 2026 00:51:40 +0000 (0:00:04.104) 0:03:48.734 ********
2026-04-09 00:53:47.507768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image':
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507798 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:53:47.507805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507817 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.507822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-09 00:53:47.507841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-09 00:53:47.507850 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.507854 | orchestrator |
2026-04-09 00:53:47.507859 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-04-09 00:53:47.507865 | orchestrator | Thursday 09 April 2026 00:51:40 +0000 (0:00:00.553) 0:03:49.288 ********
2026-04-09 00:53:47.507874 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507899 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.507905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507928 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.507934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-09 00:53:47.507981 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.507988 | orchestrator | 2026-04-09 00:53:47.507994 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-09 00:53:47.508000 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:00.813) 0:03:50.102 ******** 2026-04-09 00:53:47.508006 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.508013 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.508035 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.508039 | orchestrator | 2026-04-09 00:53:47.508043 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-09 00:53:47.508047 | orchestrator | Thursday 09 April 2026 00:51:43 +0000 (0:00:01.669) 0:03:51.772 ******** 2026-04-09 
00:53:47.508050 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.508054 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.508058 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.508062 | orchestrator | 2026-04-09 00:53:47.508065 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-09 00:53:47.508069 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:02.046) 0:03:53.818 ******** 2026-04-09 00:53:47.508073 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.508077 | orchestrator | 2026-04-09 00:53:47.508081 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-09 00:53:47.508084 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:01.271) 0:03:55.089 ******** 2026-04-09 00:53:47.508088 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-09 00:53:47.508092 | orchestrator | 2026-04-09 00:53:47.508096 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-09 00:53:47.508100 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:01.214) 0:03:56.304 ******** 2026-04-09 00:53:47.508107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:53:47.508112 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:53:47.508116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:53:47.508124 | orchestrator | 2026-04-09 00:53:47.508128 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-09 00:53:47.508132 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:03.496) 0:03:59.800 ******** 2026-04-09 00:53:47.508136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508140 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508144 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508162 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508170 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508174 | orchestrator | 2026-04-09 00:53:47.508178 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-09 00:53:47.508181 | orchestrator | Thursday 09 April 2026 00:51:52 +0000 (0:00:01.155) 0:04:00.956 ******** 2026-04-09 00:53:47.508185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:53:47.508189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
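The skipping/changed pattern above follows from the `enabled` flags inside each service item: entries whose flag is off (note the log shows both booleans like `False` and strings like `'no'`, e.g. `nova_metadata_external`) produce no config and are reported as skipped. As a minimal illustrative sketch (not kolla-ansible's actual implementation; the helper names `truthy` and `select_haproxy_services` are made up here), the selection logic behaves like this:

```python
# Minimal sketch of enabled-flag filtering as seen in the log above.
# Not kolla-ansible's real code; helper names are illustrative only.

def truthy(value):
    """Interpret kolla-style flags, which may be bools or 'yes'/'no' strings."""
    if isinstance(value, str):
        return value.lower() in ("yes", "true", "1")
    return bool(value)

def select_haproxy_services(services):
    """Return only the service entries that would produce haproxy config."""
    return {
        name: cfg
        for name, cfg in services.items()
        if truthy(cfg.get("enabled"))
    }

# Values taken from items printed in the log (trimmed to the relevant keys).
services = {
    "nova_novncproxy": {"enabled": True, "port": "6080"},
    "nova_spicehtml5proxy": {"enabled": False, "port": "6082"},
    "nova_metadata_external": {"enabled": "no", "port": "8775"},
}
print(sorted(select_haproxy_services(services)))  # ['nova_novncproxy']
```

Under this reading, only `nova_novncproxy` yields config, matching the one service family that reports `changed` rather than `skipping` in this part of the run.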
2026-04-09 00:53:47.508194 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:53:47.508205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:53:47.508209 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:53:47.508220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:53:47.508224 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508228 | orchestrator | 2026-04-09 00:53:47.508232 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:53:47.508235 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:01.586) 0:04:02.542 ******** 2026-04-09 00:53:47.508239 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.508243 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.508247 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.508250 | orchestrator | 2026-04-09 00:53:47.508254 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-04-09 00:53:47.508258 | orchestrator | Thursday 09 April 2026 00:51:56 +0000 (0:00:02.282) 0:04:04.824 ******** 2026-04-09 00:53:47.508262 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.508265 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.508269 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.508273 | orchestrator | 2026-04-09 00:53:47.508277 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-09 00:53:47.508281 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:02.990) 0:04:07.815 ******** 2026-04-09 00:53:47.508285 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-09 00:53:47.508289 | orchestrator | 2026-04-09 00:53:47.508292 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-09 00:53:47.508296 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.820) 0:04:08.636 ******** 2026-04-09 00:53:47.508300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508304 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508325 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508333 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508337 | orchestrator | 2026-04-09 00:53:47.508344 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-09 00:53:47.508348 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:01.219) 0:04:09.855 ******** 2026-04-09 00:53:47.508354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508358 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:53:47.508362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508366 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:53:47.508374 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508378 | orchestrator | 2026-04-09 00:53:47.508382 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-09 00:53:47.508385 | orchestrator | Thursday 09 April 2026 00:52:02 +0000 (0:00:01.348) 0:04:11.203 ******** 2026-04-09 00:53:47.508389 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508393 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508397 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508400 | orchestrator | 2026-04-09 00:53:47.508404 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:53:47.508408 | orchestrator | Thursday 09 April 2026 
00:52:03 +0000 (0:00:01.144) 0:04:12.348 ********
2026-04-09 00:53:47.508412 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.508416 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.508420 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.508423 | orchestrator |
2026-04-09 00:53:47.508427 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-09 00:53:47.508431 | orchestrator | Thursday 09 April 2026 00:52:06 +0000 (0:00:02.411) 0:04:14.759 ********
2026-04-09 00:53:47.508435 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.508438 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.508442 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.508446 | orchestrator |
2026-04-09 00:53:47.508450 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-04-09 00:53:47.508454 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:02.960) 0:04:17.719 ********
2026-04-09 00:53:47.508458 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-04-09 00:53:47.508461 | orchestrator |
2026-04-09 00:53:47.508477 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-04-09 00:53:47.508481 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:00.728) 0:04:18.448 ********
2026-04-09 00:53:47.508488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port':
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:53:47.508492 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:53:47.508500 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:53:47.508510 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508514 | orchestrator | 2026-04-09 00:53:47.508518 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-09 00:53:47.508522 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:01.140) 0:04:19.589 ******** 2026-04-09 00:53:47.508526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:53:47.508530 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:53:47.508538 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:53:47.508545 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508554 | orchestrator | 2026-04-09 00:53:47.508558 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-09 00:53:47.508562 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:01.070) 0:04:20.659 ******** 2026-04-09 00:53:47.508566 | orchestrator | skipping: [testbed-node-0] 
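Reading per-node outcomes out of a long run like this one is easier with a small tally. The following is a hypothetical helper (not part of the job or of kolla-ansible; the name `tally_results` and the regex are assumptions) that counts `ok`/`changed`/`skipping`/`failed` results per host from console lines of the shape shown above:

```python
import re
from collections import Counter

# Hypothetical log-summary helper: tally per-node task results from
# console lines like "orchestrator | changed: [testbed-node-0]".
RESULT_RE = re.compile(r"\b(ok|changed|skipping|failed): \[([\w.-]+)\]")

def tally_results(lines):
    """Count (host, status) pairs across all matching records."""
    counts = Counter()
    for line in lines:
        for status, host in RESULT_RE.findall(line):
            counts[(host, status)] += 1
    return counts

sample = [
    "orchestrator | changed: [testbed-node-0]",
    "orchestrator | skipping: [testbed-node-1]",
    "orchestrator | ok: [testbed-node-0]",
]
counts = tally_results(sample)
print(counts[("testbed-node-0", "changed")])  # 1
```

Fed the full console output, this kind of summary makes it quick to confirm that the `nova-spicehtml5proxy` and `nova-serialproxy` tasks skipped on every node while the ProxySQL config tasks ran everywhere.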
2026-04-09 00:53:47.508569 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:53:47.508573 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:53:47.508577 | orchestrator |
2026-04-09 00:53:47.508581 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-09 00:53:47.508596 | orchestrator | Thursday 09 April 2026 00:52:13 +0000 (0:00:01.384) 0:04:22.044 ********
2026-04-09 00:53:47.508600 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.508604 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.508608 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.508612 | orchestrator |
2026-04-09 00:53:47.508616 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-09 00:53:47.508620 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:02.313) 0:04:24.358 ********
2026-04-09 00:53:47.508623 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:47.508627 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:47.508631 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:47.508635 | orchestrator |
2026-04-09 00:53:47.508639 | orchestrator | TASK [include_role : octavia] **************************************************
2026-04-09 00:53:47.508642 | orchestrator | Thursday 09 April 2026 00:52:18 +0000 (0:00:03.132) 0:04:27.490 ********
2026-04-09 00:53:47.508646 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:47.508650 | orchestrator |
2026-04-09 00:53:47.508654 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-04-09 00:53:47.508657 | orchestrator | Thursday 09 April 2026 00:52:20 +0000 (0:00:01.215) 0:04:28.706 ********
2026-04-09 00:53:47.508664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.508669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.508673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:53:47.508680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:53:47.508696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.508716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.508738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.508743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:53:47.508747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.508762 | orchestrator | 2026-04-09 00:53:47.508766 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-09 00:53:47.508773 | orchestrator | Thursday 09 April 2026 00:52:23 +0000 (0:00:03.333) 0:04:32.040 ******** 2026-04-09 00:53:47.508777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.508781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:53:47.508796 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.508811 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508815 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.508823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:53:47.508827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.508862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:53:47.508873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.508890 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:53:47.508913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:53:47.508917 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508921 | orchestrator | 2026-04-09 00:53:47.508925 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-09 00:53:47.508929 | orchestrator | Thursday 09 April 2026 00:52:24 +0000 (0:00:00.894) 0:04:32.934 ******** 2026-04-09 00:53:47.508933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:53:47.508937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:53:47.508941 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.508945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:53:47.508948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:53:47.508952 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.508956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:53:47.508963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:53:47.508970 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.508973 | orchestrator | 2026-04-09 00:53:47.508977 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-09 00:53:47.508981 | orchestrator | Thursday 09 April 2026 00:52:25 +0000 (0:00:00.823) 0:04:33.758 ******** 2026-04-09 00:53:47.508985 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.508988 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.508992 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.508996 | orchestrator | 2026-04-09 00:53:47.509000 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-09 00:53:47.509003 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:01.332) 0:04:35.090 ******** 2026-04-09 00:53:47.509007 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.509011 | orchestrator | changed: [testbed-node-1] 
2026-04-09 00:53:47.509057 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.509061 | orchestrator | 2026-04-09 00:53:47.509065 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-09 00:53:47.509069 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:02.153) 0:04:37.243 ******** 2026-04-09 00:53:47.509072 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.509076 | orchestrator | 2026-04-09 00:53:47.509080 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-09 00:53:47.509084 | orchestrator | Thursday 09 April 2026 00:52:30 +0000 (0:00:01.413) 0:04:38.657 ******** 2026-04-09 00:53:47.509088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:53:47.509106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:53:47.509111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:53:47.509119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:53:47.509125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:53:47.509141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:53:47.509146 | orchestrator | 2026-04-09 00:53:47.509150 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-09 00:53:47.509154 | orchestrator | Thursday 09 April 2026 00:52:34 +0000 (0:00:04.694) 0:04:43.352 ******** 2026-04-09 00:53:47.509222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:53:47.509242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:53:47.509246 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:53:47.509269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:53:47.509274 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:53:47.509288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:53:47.509292 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509296 | orchestrator | 2026-04-09 00:53:47.509300 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-09 00:53:47.509304 | orchestrator | Thursday 09 April 2026 00:52:35 +0000 (0:00:00.779) 0:04:44.131 ******** 2026-04-09 00:53:47.509308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-09 00:53:47.509312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 00:53:47.509317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 00:53:47.509321 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-09 00:53:47.509338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 00:53:47.509343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 00:53:47.509346 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-09 00:53:47.509354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 00:53:47.509370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-09 00:53:47.509378 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509382 | orchestrator | 2026-04-09 
00:53:47.509385 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-09 00:53:47.509389 | orchestrator | Thursday 09 April 2026 00:52:36 +0000 (0:00:01.144) 0:04:45.276 ******** 2026-04-09 00:53:47.509393 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509397 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509401 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509404 | orchestrator | 2026-04-09 00:53:47.509408 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-09 00:53:47.509412 | orchestrator | Thursday 09 April 2026 00:52:37 +0000 (0:00:00.412) 0:04:45.688 ******** 2026-04-09 00:53:47.509416 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509419 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509423 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509427 | orchestrator | 2026-04-09 00:53:47.509430 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-09 00:53:47.509434 | orchestrator | Thursday 09 April 2026 00:52:38 +0000 (0:00:01.108) 0:04:46.797 ******** 2026-04-09 00:53:47.509438 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.509442 | orchestrator | 2026-04-09 00:53:47.509445 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-09 00:53:47.509449 | orchestrator | Thursday 09 April 2026 00:52:39 +0000 (0:00:01.476) 0:04:48.274 ******** 2026-04-09 00:53:47.509456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 00:53:47.509461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:53:47.509465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 00:53:47.509493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:53:47.509503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 00:53:47.509536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:53:47.509541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 00:53:47.509560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 00:53:47.509570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 00:53:47.509575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 00:53:47.509589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 00:53:47.509620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 00:53:47.509624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509639 | orchestrator | 2026-04-09 00:53:47.509643 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-09 00:53:47.509646 | orchestrator | Thursday 09 April 2026 00:52:43 +0000 (0:00:03.825) 0:04:52.099 ******** 2026-04-09 00:53:47.509652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-09 00:53:47.509657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:53:47.509661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 00:53:47.509684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 00:53:47.509688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-09 00:53:47.509692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:53:47.509702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509717 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 00:53:47.509738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 00:53:47.509744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509756 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-09 00:53:47.509767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:53:47.509771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-04-09 00:53:47.509788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 00:53:47.509794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-09 00:53:47.509798 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:53:47.509812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:53:47.509824 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509828 | orchestrator | 2026-04-09 00:53:47.509832 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-09 00:53:47.509836 | orchestrator | Thursday 09 April 2026 00:52:44 +0000 (0:00:00.789) 0:04:52.889 ******** 2026-04-09 00:53:47.509840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-09 00:53:47.509844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-09 00:53:47.509849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-09 00:53:47.509854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-09 00:53:47.509858 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-09 00:53:47.509866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-09 00:53:47.509870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-09 00:53:47.509874 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-09 00:53:47.509878 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-09 00:53:47.509889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-09 00:53:47.509893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-09 00:53:47.509897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-09 00:53:47.509900 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509904 | orchestrator | 2026-04-09 00:53:47.509908 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-09 00:53:47.509914 | orchestrator | Thursday 09 April 2026 00:52:45 +0000 (0:00:01.133) 0:04:54.022 ******** 2026-04-09 00:53:47.509918 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509922 
| orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509926 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509929 | orchestrator | 2026-04-09 00:53:47.509933 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-09 00:53:47.509939 | orchestrator | Thursday 09 April 2026 00:52:45 +0000 (0:00:00.398) 0:04:54.421 ******** 2026-04-09 00:53:47.509943 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.509947 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.509950 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.509954 | orchestrator | 2026-04-09 00:53:47.509958 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-09 00:53:47.509961 | orchestrator | Thursday 09 April 2026 00:52:47 +0000 (0:00:01.117) 0:04:55.539 ******** 2026-04-09 00:53:47.509965 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.509969 | orchestrator | 2026-04-09 00:53:47.509973 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-09 00:53:47.509976 | orchestrator | Thursday 09 April 2026 00:52:48 +0000 (0:00:01.314) 0:04:56.853 ******** 2026-04-09 00:53:47.509980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:53:47.509985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:53:47.509991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:53:47.509999 | orchestrator | 2026-04-09 00:53:47.510003 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-09 00:53:47.510006 | orchestrator | Thursday 09 April 2026 00:52:50 +0000 (0:00:02.249) 0:04:59.102 ******** 2026-04-09 00:53:47.510077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:53:47.510084 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:53:47.510092 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:53:47.510100 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510104 | orchestrator | 2026-04-09 00:53:47.510111 | orchestrator | TASK 
[haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-09 00:53:47.510115 | orchestrator | Thursday 09 April 2026 00:52:50 +0000 (0:00:00.359) 0:04:59.462 ******** 2026-04-09 00:53:47.510119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 00:53:47.510128 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 00:53:47.510135 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 00:53:47.510143 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510147 | orchestrator | 2026-04-09 00:53:47.510150 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-09 00:53:47.510154 | orchestrator | Thursday 09 April 2026 00:52:51 +0000 (0:00:00.600) 0:05:00.063 ******** 2026-04-09 00:53:47.510165 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510169 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510173 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510176 | orchestrator | 2026-04-09 00:53:47.510180 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-09 00:53:47.510184 | orchestrator | Thursday 09 April 2026 00:52:52 +0000 (0:00:00.634) 0:05:00.698 ******** 2026-04-09 00:53:47.510188 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510191 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510195 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:53:47.510199 | orchestrator | 2026-04-09 00:53:47.510203 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-09 00:53:47.510206 | orchestrator | Thursday 09 April 2026 00:52:53 +0000 (0:00:01.152) 0:05:01.851 ******** 2026-04-09 00:53:47.510213 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:53:47.510216 | orchestrator | 2026-04-09 00:53:47.510220 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-09 00:53:47.510224 | orchestrator | Thursday 09 April 2026 00:52:54 +0000 (0:00:01.400) 0:05:03.251 ******** 2026-04-09 00:53:47.510228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.510233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.510244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.510249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.510256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.510260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-09 00:53:47.510264 | orchestrator | 2026-04-09 00:53:47.510268 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-09 00:53:47.510272 | orchestrator | Thursday 09 April 2026 00:53:00 +0000 (0:00:05.457) 0:05:08.709 ******** 2026-04-09 00:53:47.510281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.510286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.510290 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.510300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.510304 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.510321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-09 00:53:47.510325 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510329 | orchestrator | 2026-04-09 00:53:47.510333 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-09 00:53:47.510336 | orchestrator | Thursday 09 April 2026 00:53:01 +0000 (0:00:00.897) 0:05:09.606 ******** 2026-04-09 00:53:47.510340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510367 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510382 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-09 00:53:47.510402 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510405 | orchestrator | 2026-04-09 00:53:47.510409 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-09 00:53:47.510413 | orchestrator | Thursday 09 April 2026 00:53:01 +0000 (0:00:00.847) 0:05:10.453 ******** 2026-04-09 00:53:47.510417 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.510421 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.510425 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.510428 | orchestrator | 2026-04-09 00:53:47.510434 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-09 00:53:47.510438 | orchestrator | Thursday 09 April 2026 00:53:03 +0000 (0:00:01.340) 0:05:11.794 ******** 2026-04-09 00:53:47.510442 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.510446 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.510449 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.510453 | orchestrator | 2026-04-09 00:53:47.510457 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-09 00:53:47.510461 | orchestrator | Thursday 09 April 2026 00:53:05 +0000 (0:00:02.091) 0:05:13.885 ******** 2026-04-09 00:53:47.510464 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510468 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510472 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510476 | orchestrator | 2026-04-09 00:53:47.510480 | 
orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-09 00:53:47.510483 | orchestrator | Thursday 09 April 2026 00:53:05 +0000 (0:00:00.500) 0:05:14.386 ******** 2026-04-09 00:53:47.510487 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510491 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510495 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510499 | orchestrator | 2026-04-09 00:53:47.510502 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-09 00:53:47.510506 | orchestrator | Thursday 09 April 2026 00:53:06 +0000 (0:00:00.263) 0:05:14.650 ******** 2026-04-09 00:53:47.510510 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510514 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510518 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510521 | orchestrator | 2026-04-09 00:53:47.510525 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-09 00:53:47.510529 | orchestrator | Thursday 09 April 2026 00:53:06 +0000 (0:00:00.277) 0:05:14.928 ******** 2026-04-09 00:53:47.510533 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510536 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510540 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510544 | orchestrator | 2026-04-09 00:53:47.510548 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-09 00:53:47.510557 | orchestrator | Thursday 09 April 2026 00:53:06 +0000 (0:00:00.246) 0:05:15.174 ******** 2026-04-09 00:53:47.510561 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510564 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510568 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510572 | orchestrator | 2026-04-09 00:53:47.510576 | 
orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-09 00:53:47.510580 | orchestrator | Thursday 09 April 2026 00:53:07 +0000 (0:00:00.546) 0:05:15.721 ******** 2026-04-09 00:53:47.510583 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510587 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510591 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510595 | orchestrator | 2026-04-09 00:53:47.510598 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-09 00:53:47.510602 | orchestrator | Thursday 09 April 2026 00:53:07 +0000 (0:00:00.531) 0:05:16.253 ******** 2026-04-09 00:53:47.510606 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510610 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510614 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510617 | orchestrator | 2026-04-09 00:53:47.510621 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-09 00:53:47.510625 | orchestrator | Thursday 09 April 2026 00:53:08 +0000 (0:00:00.646) 0:05:16.900 ******** 2026-04-09 00:53:47.510629 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510633 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510636 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510640 | orchestrator | 2026-04-09 00:53:47.510644 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-09 00:53:47.510648 | orchestrator | Thursday 09 April 2026 00:53:09 +0000 (0:00:00.622) 0:05:17.522 ******** 2026-04-09 00:53:47.510652 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510655 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510659 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510663 | orchestrator | 2026-04-09 00:53:47.510667 | orchestrator | RUNNING HANDLER [loadbalancer : Stop 
backup haproxy container] ***************** 2026-04-09 00:53:47.510671 | orchestrator | Thursday 09 April 2026 00:53:09 +0000 (0:00:00.933) 0:05:18.455 ******** 2026-04-09 00:53:47.510674 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510678 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510682 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510686 | orchestrator | 2026-04-09 00:53:47.510689 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-09 00:53:47.510693 | orchestrator | Thursday 09 April 2026 00:53:10 +0000 (0:00:00.930) 0:05:19.386 ******** 2026-04-09 00:53:47.510697 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510701 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510704 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510708 | orchestrator | 2026-04-09 00:53:47.510712 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-09 00:53:47.510716 | orchestrator | Thursday 09 April 2026 00:53:11 +0000 (0:00:00.921) 0:05:20.307 ******** 2026-04-09 00:53:47.510720 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.510729 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.510733 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.510736 | orchestrator | 2026-04-09 00:53:47.510740 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-09 00:53:47.510744 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:08.176) 0:05:28.484 ******** 2026-04-09 00:53:47.510748 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510752 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510755 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510759 | orchestrator | 2026-04-09 00:53:47.510763 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-09 
00:53:47.510767 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:01.173) 0:05:29.657 ******** 2026-04-09 00:53:47.510771 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.510777 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.510781 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.510785 | orchestrator | 2026-04-09 00:53:47.510789 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-09 00:53:47.510794 | orchestrator | Thursday 09 April 2026 00:53:29 +0000 (0:00:08.760) 0:05:38.418 ******** 2026-04-09 00:53:47.510798 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510802 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510806 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510810 | orchestrator | 2026-04-09 00:53:47.510814 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-09 00:53:47.510817 | orchestrator | Thursday 09 April 2026 00:53:33 +0000 (0:00:03.844) 0:05:42.262 ******** 2026-04-09 00:53:47.510821 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:53:47.510825 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:53:47.510829 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:53:47.510832 | orchestrator | 2026-04-09 00:53:47.510836 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-09 00:53:47.510840 | orchestrator | Thursday 09 April 2026 00:53:37 +0000 (0:00:04.100) 0:05:46.362 ******** 2026-04-09 00:53:47.510844 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510848 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510852 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510855 | orchestrator | 2026-04-09 00:53:47.510859 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-09 00:53:47.510863 | 
orchestrator | Thursday 09 April 2026 00:53:38 +0000 (0:00:00.669) 0:05:47.032 ******** 2026-04-09 00:53:47.510867 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510870 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510874 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510878 | orchestrator | 2026-04-09 00:53:47.510882 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-09 00:53:47.510886 | orchestrator | Thursday 09 April 2026 00:53:38 +0000 (0:00:00.339) 0:05:47.371 ******** 2026-04-09 00:53:47.510889 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510893 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510897 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510901 | orchestrator | 2026-04-09 00:53:47.510905 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-09 00:53:47.510908 | orchestrator | Thursday 09 April 2026 00:53:39 +0000 (0:00:00.341) 0:05:47.713 ******** 2026-04-09 00:53:47.510912 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510918 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510922 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510926 | orchestrator | 2026-04-09 00:53:47.510930 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-09 00:53:47.510934 | orchestrator | Thursday 09 April 2026 00:53:39 +0000 (0:00:00.332) 0:05:48.045 ******** 2026-04-09 00:53:47.510937 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510941 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510945 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510949 | orchestrator | 2026-04-09 00:53:47.510952 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-09 00:53:47.510956 | 
orchestrator | Thursday 09 April 2026 00:53:40 +0000 (0:00:00.651) 0:05:48.696 ******** 2026-04-09 00:53:47.510960 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:53:47.510964 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:53:47.510968 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:53:47.510971 | orchestrator | 2026-04-09 00:53:47.510975 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-09 00:53:47.510979 | orchestrator | Thursday 09 April 2026 00:53:40 +0000 (0:00:00.357) 0:05:49.054 ******** 2026-04-09 00:53:47.510983 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.510987 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.510993 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.510997 | orchestrator | 2026-04-09 00:53:47.511001 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-09 00:53:47.511004 | orchestrator | Thursday 09 April 2026 00:53:45 +0000 (0:00:04.805) 0:05:53.860 ******** 2026-04-09 00:53:47.511008 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:53:47.511012 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:53:47.511016 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:53:47.511033 | orchestrator | 2026-04-09 00:53:47.511037 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:53:47.511040 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-09 00:53:47.511045 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-09 00:53:47.511049 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-09 00:53:47.511053 | orchestrator | 2026-04-09 00:53:47.511056 | orchestrator | 2026-04-09 00:53:47.511060 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 00:53:47.511064 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:00.818) 0:05:54.679 ******** 2026-04-09 00:53:47.511068 | orchestrator | =============================================================================== 2026-04-09 00:53:47.511072 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.76s 2026-04-09 00:53:47.511075 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.18s 2026-04-09 00:53:47.511079 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.46s 2026-04-09 00:53:47.511083 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.88s 2026-04-09 00:53:47.511087 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.81s 2026-04-09 00:53:47.511090 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.69s 2026-04-09 00:53:47.511094 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.56s 2026-04-09 00:53:47.511098 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.37s 2026-04-09 00:53:47.511104 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.29s 2026-04-09 00:53:47.511108 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.10s 2026-04-09 00:53:47.511112 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.10s 2026-04-09 00:53:47.511116 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.01s 2026-04-09 00:53:47.511119 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.91s 2026-04-09 00:53:47.511123 | orchestrator | haproxy-config : Copying over 
manila haproxy config --------------------- 3.86s 2026-04-09 00:53:47.511127 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.84s 2026-04-09 00:53:47.511131 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.83s 2026-04-09 00:53:47.511134 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.77s 2026-04-09 00:53:47.511138 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.71s 2026-04-09 00:53:47.511142 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.60s 2026-04-09 00:53:47.511146 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.57s 2026-04-09 00:53:47.511150 | orchestrator | 2026-04-09 00:53:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:50.546131 | orchestrator | 2026-04-09 00:53:50 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:53:50.549918 | orchestrator | 2026-04-09 00:53:50 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:50.552949 | orchestrator | 2026-04-09 00:53:50 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:53:50.553128 | orchestrator | 2026-04-09 00:53:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:53.589821 | orchestrator | 2026-04-09 00:53:53 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:53:53.591545 | orchestrator | 2026-04-09 00:53:53 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:53.592254 | orchestrator | 2026-04-09 00:53:53 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:53:53.592313 | orchestrator | 2026-04-09 00:53:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:53:56.627494 | orchestrator 
| 2026-04-09 00:53:56 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:53:56.628081 | orchestrator | 2026-04-09 00:53:56 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:53:56.630971 | orchestrator | 2026-04-09 00:53:56 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:53:56.631118 | orchestrator | 2026-04-09 00:53:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:04.804454 | orchestrator | 2026-04-09 00:56:04 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state
STARTED 2026-04-09 00:56:04.806360 | orchestrator | 2026-04-09 00:56:04 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:56:04.807945 | orchestrator | 2026-04-09 00:56:04 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:04.807986 | orchestrator | 2026-04-09 00:56:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:07.846711 | orchestrator | 2026-04-09 00:56:07 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:56:07.849632 | orchestrator | 2026-04-09 00:56:07 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:56:07.850762 | orchestrator | 2026-04-09 00:56:07 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:07.850939 | orchestrator | 2026-04-09 00:56:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:10.892954 | orchestrator | 2026-04-09 00:56:10 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:56:10.894658 | orchestrator | 2026-04-09 00:56:10 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state STARTED 2026-04-09 00:56:10.895821 | orchestrator | 2026-04-09 00:56:10 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:10.895889 | orchestrator | 2026-04-09 00:56:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:13.946384 | orchestrator | 2026-04-09 00:56:13 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:56:13.948421 | orchestrator | 2026-04-09 00:56:13 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:13.957125 | orchestrator | 2026-04-09 00:56:13 | INFO  | Task bffcce9e-6e37-44dc-851a-f16d03ebc217 is in state SUCCESS 2026-04-09 00:56:13.958184 | orchestrator | 2026-04-09 00:56:13.958228 | orchestrator | [WARNING]: Collection community.general does not support Ansible 
version 2026-04-09 00:56:13.958235 | orchestrator | 2.16.14 2026-04-09 00:56:13.958241 | orchestrator | 2026-04-09 00:56:13.958246 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-09 00:56:13.958252 | orchestrator | 2026-04-09 00:56:13.958257 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 00:56:13.958263 | orchestrator | Thursday 09 April 2026 00:45:33 +0000 (0:00:00.901) 0:00:00.901 ******** 2026-04-09 00:56:13.958269 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.958275 | orchestrator | 2026-04-09 00:56:13.958281 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 00:56:13.958286 | orchestrator | Thursday 09 April 2026 00:45:35 +0000 (0:00:01.595) 0:00:02.497 ******** 2026-04-09 00:56:13.958291 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958297 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958302 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958307 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958312 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958317 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958323 | orchestrator | 2026-04-09 00:56:13.958328 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 00:56:13.958348 | orchestrator | Thursday 09 April 2026 00:45:37 +0000 (0:00:01.888) 0:00:04.386 ******** 2026-04-09 00:56:13.958353 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958358 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958363 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958368 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958373 | orchestrator | ok: [testbed-node-4] 
2026-04-09 00:56:13.958378 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958383 | orchestrator | 2026-04-09 00:56:13.958388 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 00:56:13.958394 | orchestrator | Thursday 09 April 2026 00:45:37 +0000 (0:00:00.732) 0:00:05.119 ******** 2026-04-09 00:56:13.958399 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958403 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958408 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958414 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958418 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958423 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958428 | orchestrator | 2026-04-09 00:56:13.958434 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 00:56:13.958439 | orchestrator | Thursday 09 April 2026 00:45:39 +0000 (0:00:01.552) 0:00:06.671 ******** 2026-04-09 00:56:13.958444 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958449 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958454 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958459 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958464 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958469 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958474 | orchestrator | 2026-04-09 00:56:13.958479 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 00:56:13.958484 | orchestrator | Thursday 09 April 2026 00:45:40 +0000 (0:00:01.264) 0:00:07.936 ******** 2026-04-09 00:56:13.958489 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958494 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958499 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958504 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958510 | 
orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958515 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958520 | orchestrator | 2026-04-09 00:56:13.958525 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 00:56:13.958530 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:01.032) 0:00:08.969 ******** 2026-04-09 00:56:13.958535 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958540 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958545 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958550 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958555 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958561 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958565 | orchestrator | 2026-04-09 00:56:13.958571 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 00:56:13.958576 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:01.344) 0:00:10.313 ******** 2026-04-09 00:56:13.958581 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.958587 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.958592 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.958660 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.958667 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.958673 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.958678 | orchestrator | 2026-04-09 00:56:13.958683 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 00:56:13.958688 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:00.635) 0:00:10.948 ******** 2026-04-09 00:56:13.958809 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958825 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958865 | orchestrator | ok: [testbed-node-2] 2026-04-09 
00:56:13.958876 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958887 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958890 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958893 | orchestrator | 2026-04-09 00:56:13.958896 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 00:56:13.958900 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:00.886) 0:00:11.835 ******** 2026-04-09 00:56:13.958903 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:56:13.958906 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:56:13.958909 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:13.958912 | orchestrator | 2026-04-09 00:56:13.958915 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-09 00:56:13.958918 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:00.740) 0:00:12.576 ******** 2026-04-09 00:56:13.958921 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.958925 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.958928 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.958931 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.958944 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.958950 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.958955 | orchestrator | 2026-04-09 00:56:13.958960 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 00:56:13.958965 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:01.125) 0:00:13.701 ******** 2026-04-09 00:56:13.958970 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:56:13.958975 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 
00:56:13.958981 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:13.958986 | orchestrator | 2026-04-09 00:56:13.958991 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 00:56:13.958996 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:02.831) 0:00:16.533 ******** 2026-04-09 00:56:13.959001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:56:13.959007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:56:13.959012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:56:13.959017 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959022 | orchestrator | 2026-04-09 00:56:13.959027 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 00:56:13.959032 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:00.968) 0:00:17.501 ******** 2026-04-09 00:56:13.959037 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959050 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959055 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959060 | orchestrator | 
2026-04-09 00:56:13.959066 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 00:56:13.959071 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:02.439) 0:00:19.941 ******** 2026-04-09 00:56:13.959078 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959089 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959095 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959104 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959109 | orchestrator | 2026-04-09 00:56:13.959114 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 00:56:13.959119 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:00.258) 0:00:20.199 ******** 2026-04-09 00:56:13.959131 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 00:45:47.333522', 'end': '2026-04-09 00:45:47.423601', 'delta': '0:00:00.090079', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959138 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 00:45:48.001955', 'end': '2026-04-09 00:45:48.092955', 'delta': '0:00:00.091000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959144 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 00:45:48.841222', 'end': '2026-04-09 00:45:48.931477', 'delta': '0:00:00.090255', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.959188 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959195 | orchestrator | 2026-04-09 00:56:13.959200 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 00:56:13.959205 | orchestrator | Thursday 09 April 2026 00:45:53 +0000 (0:00:00.655) 0:00:20.855 ******** 2026-04-09 00:56:13.959215 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.959220 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.959226 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.959231 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.959432 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.959477 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.959483 | orchestrator | 2026-04-09 00:56:13.959489 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 00:56:13.959494 | orchestrator | Thursday 09 April 2026 00:45:55 +0000 (0:00:01.645) 0:00:22.501 ******** 2026-04-09 00:56:13.959499 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.959504 | orchestrator | 2026-04-09 00:56:13.959509 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 00:56:13.959514 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:00.901) 0:00:23.403 ******** 2026-04-09 00:56:13.959519 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959525 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959530 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959536 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959541 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959547 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959552 | orchestrator | 2026-04-09 00:56:13.959557 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 00:56:13.959563 | orchestrator | Thursday 09 April 2026 00:45:57 +0000 (0:00:01.287) 0:00:24.690 ******** 2026-04-09 00:56:13.959568 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959573 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959578 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959582 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959586 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959591 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959595 | orchestrator | 2026-04-09 00:56:13.959600 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:56:13.959605 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:01.065) 0:00:25.756 ******** 2026-04-09 00:56:13.959610 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959614 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959618 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959623 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959632 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959637 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959641 | orchestrator | 2026-04-09 00:56:13.959646 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 00:56:13.959651 | orchestrator | Thursday 09 April 2026 00:45:59 +0000 (0:00:00.934) 0:00:26.691 ******** 2026-04-09 00:56:13.959656 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959660 | orchestrator | 2026-04-09 00:56:13.959667 | orchestrator | TASK [ceph-facts 
: Generate cluster fsid] ************************************** 2026-04-09 00:56:13.959672 | orchestrator | Thursday 09 April 2026 00:45:59 +0000 (0:00:00.100) 0:00:26.791 ******** 2026-04-09 00:56:13.959678 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959683 | orchestrator | 2026-04-09 00:56:13.959688 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:56:13.959694 | orchestrator | Thursday 09 April 2026 00:45:59 +0000 (0:00:00.184) 0:00:26.975 ******** 2026-04-09 00:56:13.959700 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959705 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959721 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959726 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959732 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959737 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959743 | orchestrator | 2026-04-09 00:56:13.959769 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 00:56:13.959775 | orchestrator | Thursday 09 April 2026 00:46:00 +0000 (0:00:00.569) 0:00:27.545 ******** 2026-04-09 00:56:13.959786 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959792 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959797 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959802 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959808 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959813 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959819 | orchestrator | 2026-04-09 00:56:13.959824 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 00:56:13.959829 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:01.225) 0:00:28.770 ******** 2026-04-09 00:56:13.959869 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959875 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959880 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959886 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959891 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959896 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959901 | orchestrator | 2026-04-09 00:56:13.959906 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 00:56:13.959912 | orchestrator | Thursday 09 April 2026 00:46:02 +0000 (0:00:00.686) 0:00:29.457 ******** 2026-04-09 00:56:13.959917 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959922 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959927 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959932 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959937 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959942 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.959947 | orchestrator | 2026-04-09 00:56:13.959952 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 00:56:13.959957 | orchestrator | Thursday 09 April 2026 00:46:03 +0000 (0:00:00.954) 0:00:30.411 ******** 2026-04-09 00:56:13.959963 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.959968 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.959973 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.959978 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.959983 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.959987 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.960216 | orchestrator | 2026-04-09 00:56:13.960229 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 
2026-04-09 00:56:13.960234 | orchestrator | Thursday 09 April 2026 00:46:04 +0000 (0:00:01.378) 0:00:31.789 ******** 2026-04-09 00:56:13.960240 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.960245 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.960250 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.960256 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.960261 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.960267 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.960272 | orchestrator | 2026-04-09 00:56:13.960277 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 00:56:13.960283 | orchestrator | Thursday 09 April 2026 00:46:06 +0000 (0:00:02.173) 0:00:33.962 ******** 2026-04-09 00:56:13.960288 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.960293 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.960299 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.960304 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.960310 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.960316 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.960321 | orchestrator | 2026-04-09 00:56:13.960327 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 00:56:13.960333 | orchestrator | Thursday 09 April 2026 00:46:07 +0000 (0:00:01.118) 0:00:35.081 ******** 2026-04-09 00:56:13.960340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-09 00:56:13.960356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042', 'scsi-SQEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part1', 'scsi-SQEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part14', 'scsi-SQEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part15', 'scsi-SQEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part16', 'scsi-SQEMU_QEMU_HARDDISK_60227d82-9bd6-4e9b-88ba-e02146459042-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960476 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960539 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa', 'scsi-SQEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_dec93e02-284b-4105-b505-be63281832aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 
'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960561 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.960568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960612 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.960618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8', 'scsi-SQEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_750703c1-2485-4dbe-94f0-f3c1f99dc2e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f', 'dm-uuid-LVM-MwHp97WqxiAKjrPzM1rqjGGR9t0YLZOSpWqFrFJnKVC7KVZHoIWS487LC2ojJlF4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': 
[], 'host': '', 'links': {'ids': ['dm-name-ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b', 'dm-uuid-LVM-oidJBbkg2nUZzbFblhIzA8HRXCMWuowncfNejT9B0KxURmhyfyY4upG4oDblHBtU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960698 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.960703 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uiJAMo-y50f-8GAZ-AMdd-NNz0-bt1F-FslSBh', 'scsi-0QEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7', 'scsi-SQEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-99esnd-k3Yc-WLEz-KCyI-RcuL-Idv2-dz5HD0', 'scsi-0QEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1', 'scsi-SQEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a', 'scsi-SQEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960795 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.960801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9', 'dm-uuid-LVM-XysAVcqS16jjDfkbWOU4ZClUSjuwTp81wvaiLa0cF3uDbvQpuSYWqCzba7pjNHyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd', 'dm-uuid-LVM-soYDyklFCUZiaWxHAKv86XAnIxxqPtyqUp0438blgymvNmn5pe4IrJcTciV01Wa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960874 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:13.960897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:13.960902 | 
orchestrator | skipping: [testbed-node-4] => (items sdb, sdc, sdd, sr0; per-item block device facts elided) 2026-04-09 00:56:13.960918 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.960923 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0–loop7, sda, sdb, sdc, sdd, sr0; per-item block device facts elided) 2026-04-09 00:56:13.961019 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.961022 | orchestrator | 2026-04-09 00:56:13.961025 | orchestrator | TASK [ceph-facts : Set_fact devices
generate device list when osd_auto_discovery] *** 2026-04-09 00:56:13.961031 | orchestrator | Thursday 09 April 2026 00:46:09 +0000 (0:00:01.823) 0:00:36.905 ******** 2026-04-09 00:56:13.961035 | orchestrator | skipping: [testbed-node-0] => (items loop0–loop7, sda, sr0; skip_reason 'Conditional result was False', false_condition 'inventory_hostname in groups.get(osd_group_name, [])'; per-item block device facts elided) 2026-04-09 00:56:13.961084 | orchestrator | skipping: [testbed-node-1] => (items loop0–loop7, sda, sr0; same false_condition; per-item block device facts elided) 2026-04-09 00:56:13.961373 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.961393 | orchestrator | skipping: [testbed-node-2] => (items loop0–loop7, sda, sr0; same false_condition; per-item block device facts elided) 2026-04-09 00:56:13.961400 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.961456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links':
{'ids': ['dm-name-ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f', 'dm-uuid-LVM-MwHp97WqxiAKjrPzM1rqjGGR9t0YLZOSpWqFrFJnKVC7KVZHoIWS487LC2ojJlF4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961461 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b', 'dm-uuid-LVM-oidJBbkg2nUZzbFblhIzA8HRXCMWuowncfNejT9B0KxURmhyfyY4upG4oDblHBtU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-09 00:56:13.961467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961478 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.961481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-09 00:56:13.961492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961503 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uiJAMo-y50f-8GAZ-AMdd-NNz0-bt1F-FslSBh', 'scsi-0QEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7', 'scsi-SQEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-99esnd-k3Yc-WLEz-KCyI-RcuL-Idv2-dz5HD0', 'scsi-0QEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1', 'scsi-SQEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9', 
'dm-uuid-LVM-XysAVcqS16jjDfkbWOU4ZClUSjuwTp81wvaiLa0cF3uDbvQpuSYWqCzba7pjNHyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a', 'scsi-SQEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961564 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd', 'dm-uuid-LVM-soYDyklFCUZiaWxHAKv86XAnIxxqPtyqUp0438blgymvNmn5pe4IrJcTciV01Wa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961582 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.961587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961595 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77990f9--27fa--58e8--a0b8--915245e923bd-osd--block--e77990f9--27fa--58e8--a0b8--915245e923bd', 'dm-uuid-LVM-uDhV6caHL211nYXtqSdoo3op85zXuT4LC4DceUxfDV1jL83Kf3awHkqz08dj0ZJi'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961646 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6c03351d--b2bb--55a5--9b19--7d0118202256-osd--block--6c03351d--b2bb--55a5--9b19--7d0118202256', 'dm-uuid-LVM-4S4msUcaLagRA6mssTeNi6WZstmM0v6Px8dOP78xYQMY6K7swxucxsNeXVpx2NZm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961673 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961678 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Fu4yKb-0Kk3-b0rT-0P6A-kNfI-wm1i-82Giss', 'scsi-0QEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b', 'scsi-SQEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961700 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:13.961703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZTDjeK-2HwQ-AeGM-7YlK-G32T-0cCX-cAtmDf', 'scsi-0QEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48', 'scsi-SQEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961728 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec', 'scsi-SQEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961808 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961818 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.961824 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e77990f9--27fa--58e8--a0b8--915245e923bd-osd--block--e77990f9--27fa--58e8--a0b8--915245e923bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3EmKSY-AkaE-j80P-5gp5-pF4R-nSaf-PH5I5E', 'scsi-0QEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3', 'scsi-SQEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6c03351d--b2bb--55a5--9b19--7d0118202256-osd--block--6c03351d--b2bb--55a5--9b19--7d0118202256'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HedMRi-I0PG-09i6-bTXb-lsmq-3ePu-22hUR5', 'scsi-0QEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645', 'scsi-SQEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47', 'scsi-SQEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:56:13.961982 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.961987 | orchestrator |
2026-04-09 00:56:13.961992 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 00:56:13.961997 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:01.822) 0:00:38.727 ********
2026-04-09 00:56:13.962047 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.962056 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.962062 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.962067 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.962072 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.962078 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.962083 | orchestrator |
2026-04-09 00:56:13.962089 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 00:56:13.962094 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:02.056) 0:00:40.784 ********
2026-04-09 00:56:13.962099 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.962105 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.962110 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.962115 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.962121 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.962126 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.962131 | orchestrator |
2026-04-09 00:56:13.962137 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 00:56:13.962142 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:01.047) 0:00:41.832 ********
2026-04-09 00:56:13.962147 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.962152 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.962157 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.962162 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.962168 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.962173 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.962178 | orchestrator |
2026-04-09 00:56:13.962183 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 00:56:13.962188 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:01.415) 0:00:43.248 ********
2026-04-09 00:56:13.962199 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.962204 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.962209 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.962214 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.962220 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.962226 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.962231 | orchestrator |
2026-04-09 00:56:13.962236 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 00:56:13.962241 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:00.924) 0:00:44.173 ********
2026-04-09 00:56:13.962246 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.962251 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.962256 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.962260 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.962265 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.962270 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.962275 | orchestrator |
2026-04-09 00:56:13.962308 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 00:56:13.962315 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:01.454) 0:00:45.628 ********
2026-04-09 00:56:13.962320 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.962325 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.962330 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.962336 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.962340 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.962346 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.962351 | orchestrator |
2026-04-09 00:56:13.962356 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 00:56:13.962435 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:01.217) 0:00:46.846 ********
2026-04-09 00:56:13.962588 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:56:13.962594 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:56:13.962599 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:56:13.962604 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:56:13.962609 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:56:13.962615 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:56:13.962620 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:56:13.962625 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:56:13.962631 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:56:13.962636 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:56:13.962641 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:56:13.962646 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:56:13.962651 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:56:13.962656 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:56:13.962661 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:56:13.962667 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:56:13.962676 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:56:13.962681 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:56:13.962686 | orchestrator |
2026-04-09 00:56:13.962692 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 00:56:13.962697 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:04.303) 0:00:51.150 ********
2026-04-09 00:56:13.962702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:56:13.962707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:56:13.962712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:56:13.962723 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.962729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:56:13.962734 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:56:13.962739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:56:13.962744 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.962749 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:56:13.962755 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:56:13.962776 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:56:13.962800 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.962805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:56:13.962811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:56:13.962816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:56:13.962821 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.962826 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:56:13.962843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:56:13.962848 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:56:13.962853 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.962858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:56:13.962863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:56:13.962868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:56:13.962873 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.962878 | orchestrator |
2026-04-09 00:56:13.962884 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 00:56:13.962889 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:01.318) 0:00:52.469 ********
2026-04-09 00:56:13.962895 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.962900 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.962905 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.962911 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.962959 | orchestrator |
2026-04-09 00:56:13.962965 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 00:56:13.962972 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.992) 0:00:53.462 ********
2026-04-09 00:56:13.962977 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.962982 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.962987 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.962993 | orchestrator |
2026-04-09 00:56:13.962998 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 00:56:13.963003 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.293) 0:00:53.755 ********
2026-04-09 00:56:13.963009 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963014 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.963020 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.963025 | orchestrator |
2026-04-09 00:56:13.963030 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 00:56:13.963035 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.354) 0:00:54.110 ********
2026-04-09 00:56:13.963041 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963046 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.963051 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.963056 | orchestrator |
2026-04-09 00:56:13.963061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 00:56:13.963066 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.352) 0:00:54.463 ********
2026-04-09 00:56:13.963071 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.963173 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.963182 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.963188 | orchestrator |
2026-04-09 00:56:13.963193 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 00:56:13.963198 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.765) 0:00:55.228 ********
2026-04-09 00:56:13.963204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:56:13.963209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:56:13.963214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:56:13.963220 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963225 | orchestrator |
2026-04-09 00:56:13.963230 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 00:56:13.963236 | orchestrator | Thursday 09 April 2026 00:46:28 +0000 (0:00:00.444) 0:00:55.672 ********
2026-04-09 00:56:13.963241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:56:13.963247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:56:13.963252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:56:13.963258 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963263 | orchestrator |
2026-04-09 00:56:13.963269 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 00:56:13.963277 | orchestrator | Thursday 09 April 2026 00:46:28 +0000 (0:00:00.312) 0:00:55.984 ********
2026-04-09 00:56:13.963283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:56:13.963288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:56:13.963292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:56:13.963298 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963304 | orchestrator |
2026-04-09 00:56:13.963309 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 00:56:13.963315 | orchestrator | Thursday 09 April 2026 00:46:29 +0000 (0:00:00.320) 0:00:56.305 ********
2026-04-09 00:56:13.963320 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.963325 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.963330 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.963335 | orchestrator |
2026-04-09 00:56:13.963340 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 00:56:13.963346 | orchestrator | Thursday 09 April 2026 00:46:29 +0000 (0:00:00.265) 0:00:56.570 ********
2026-04-09 00:56:13.963351 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 00:56:13.963356 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 00:56:13.963361 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 00:56:13.963366 | orchestrator |
2026-04-09 00:56:13.963414 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 00:56:13.963422 | orchestrator | Thursday 09 April 2026 00:46:29 +0000 (0:00:00.628) 0:00:57.200 ********
2026-04-09 00:56:13.963427 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:56:13.963432 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:56:13.963438 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:56:13.963443 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 00:56:13.963449 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 00:56:13.963454 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 00:56:13.963460 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 00:56:13.963465 | orchestrator |
2026-04-09 00:56:13.963470 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-09 00:56:13.963475 | orchestrator | Thursday 09 April 2026 00:46:30 +0000 (0:00:00.968) 0:00:58.168 ********
2026-04-09 00:56:13.963485 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:56:13.963490 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:56:13.963496 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:56:13.963501 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-09 00:56:13.963506 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 00:56:13.963511 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-09 00:56:13.963517 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-09 00:56:13.963522 | orchestrator |
2026-04-09 00:56:13.963527 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:56:13.963532 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:01.666) 0:00:59.835 ********
2026-04-09 00:56:13.963538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.963544 | orchestrator |
2026-04-09 00:56:13.963549 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:56:13.963554 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:01.074) 0:01:00.909 ********
2026-04-09 00:56:13.963559 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.963564 | orchestrator |
2026-04-09 00:56:13.963569 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:56:13.963574 | orchestrator | Thursday 09 April 2026 00:46:35 +0000 (0:00:01.918) 0:01:02.828 ********
2026-04-09 00:56:13.963579 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963584 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.963590 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.963595 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.963601 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.963606 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.963611 | orchestrator |
2026-04-09 00:56:13.963616 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:56:13.963622 | orchestrator | Thursday 09 April 2026 00:46:36 +0000 (0:00:01.174) 0:01:04.002 ********
2026-04-09 00:56:13.963627 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.963632 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.963637 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.963643 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.963648 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.963653 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.963658 | orchestrator |
2026-04-09 00:56:13.963664 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:56:13.963669 | orchestrator | Thursday 09 April 2026 00:46:38 +0000 (0:00:01.607) 0:01:05.610 ********
2026-04-09 00:56:13.963673 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.963678 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.963682 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.963687 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.963691 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.963700 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.963705 | orchestrator |
2026-04-09 00:56:13.963709 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:56:13.963714 | orchestrator | Thursday 09 April 2026 00:46:39 +0000 (0:00:01.174) 0:01:06.784 ********
2026-04-09 00:56:13.963720 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.963725 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.963731 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.963741 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.963746 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.963751 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.963756 | orchestrator |
2026-04-09 00:56:13.963773 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:56:13.963779 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:01.561) 0:01:08.347 ********
2026-04-09 00:56:13.963785 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963790 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.963795 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.963800 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.963805 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.963811 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.963815 | orchestrator |
2026-04-09 00:56:13.963820 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:56:13.963864 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:00.797) 0:01:09.144 ********
2026-04-09 00:56:13.963871 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.963876 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.963882 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.963887 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963893 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.963898 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.963903 | orchestrator |
2026-04-09 00:56:13.963909 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:56:13.963914 | orchestrator | Thursday 09 April 2026 00:46:42 +0000 (0:00:00.809) 0:01:09.954 ********
2026-04-09 00:56:13.963919 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.963924 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.963929 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.963934 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.963939 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.963945 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.963950 | orchestrator |
2026-04-09 00:56:13.963955 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:56:13.963960 | orchestrator | Thursday 09 April 2026 00:46:43 +0000 (0:00:00.639) 0:01:10.593 ********
2026-04-09 00:56:13.963965 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.963970 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.963975 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.963980 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.963985 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.963990 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.963996 | orchestrator |
2026-04-09 00:56:13.964001 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:56:13.964006 | orchestrator | Thursday 09 April 2026 00:46:44 +0000 (0:00:01.277) 0:01:11.871 ********
2026-04-09 00:56:13.964011 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.964017 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.964022 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.964027 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.964032 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.964037 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.964042 | orchestrator |
2026-04-09 00:56:13.964047 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:56:13.964053 | orchestrator | Thursday 09 April 2026 00:46:45 +0000 (0:00:00.897) 0:01:12.875 ********
2026-04-09 00:56:13.964058 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.964064 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.964070 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.964075 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.964080 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.964086 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.964091 | orchestrator |
2026-04-09 00:56:13.964097 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:56:13.964108 | orchestrator | Thursday 09 April 2026 00:46:46 +0000 (0:00:00.897) 0:01:13.772 ********
2026-04-09 00:56:13.964113 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.964119 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.964124 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.964129 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.964135 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.964140 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.964145 | orchestrator |
2026-04-09 00:56:13.964150 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:56:13.964155 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.777) 0:01:14.550 ********
2026-04-09 00:56:13.964160 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.964165 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.964170 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.964176 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.964181 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.964187 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.964192 | orchestrator |
2026-04-09 00:56:13.964197 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:56:13.964202 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:01.448) 0:01:15.999 ********
2026-04-09 00:56:13.964208 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.964213 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.964219 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.964224 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.964229 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.964235 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.964240 | orchestrator |
2026-04-09 00:56:13.964245 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:56:13.964250 | orchestrator | Thursday 09 April 2026 00:46:49 +0000 (0:00:01.117) 0:01:17.116 ********
2026-04-09 00:56:13.964256 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.964261 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.964267 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.964271 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.964280 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.964285 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.964290 | orchestrator |
2026-04-09 00:56:13.964295 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:56:13.964300 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:01.049) 0:01:18.166 ********
2026-04-09 00:56:13.964306 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.964311 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.964316 |
orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.964322 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.964327 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.964332 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.964337 | orchestrator | 2026-04-09 00:56:13.964342 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:56:13.964347 | orchestrator | Thursday 09 April 2026 00:46:51 +0000 (0:00:00.663) 0:01:18.830 ******** 2026-04-09 00:56:13.964352 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.964357 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.964362 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.964367 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.964373 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.964378 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.964383 | orchestrator | 2026-04-09 00:56:13.964410 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:56:13.964417 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:00.784) 0:01:19.615 ******** 2026-04-09 00:56:13.964423 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.964428 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.964438 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.964443 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.964448 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.964453 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.964458 | orchestrator | 2026-04-09 00:56:13.964464 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:56:13.964470 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:00.543) 0:01:20.158 ******** 2026-04-09 00:56:13.964474 | orchestrator | ok: 
[testbed-node-0] 2026-04-09 00:56:13.964479 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.964485 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.964490 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.964495 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.964500 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.964505 | orchestrator | 2026-04-09 00:56:13.964510 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:56:13.964515 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:00.928) 0:01:21.087 ******** 2026-04-09 00:56:13.964521 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.964526 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.964531 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.964536 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.964541 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.964546 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.964551 | orchestrator | 2026-04-09 00:56:13.964556 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 00:56:13.964561 | orchestrator | Thursday 09 April 2026 00:46:55 +0000 (0:00:01.268) 0:01:22.356 ******** 2026-04-09 00:56:13.964567 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.964572 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.964576 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.964582 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.964588 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.964593 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.964598 | orchestrator | 2026-04-09 00:56:13.964603 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 00:56:13.964608 | orchestrator | Thursday 09 April 2026 00:46:56 +0000 (0:00:01.591) 
0:01:23.947 ******** 2026-04-09 00:56:13.964613 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.964618 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.964623 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.964629 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.964634 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.964639 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.964645 | orchestrator | 2026-04-09 00:56:13.964650 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 00:56:13.964655 | orchestrator | Thursday 09 April 2026 00:46:59 +0000 (0:00:02.389) 0:01:26.336 ******** 2026-04-09 00:56:13.964660 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.964667 | orchestrator | 2026-04-09 00:56:13.964672 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 00:56:13.964677 | orchestrator | Thursday 09 April 2026 00:47:00 +0000 (0:00:00.990) 0:01:27.327 ******** 2026-04-09 00:56:13.964682 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.964687 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.964693 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.964698 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.964703 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.964708 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.964713 | orchestrator | 2026-04-09 00:56:13.964718 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 00:56:13.964723 | orchestrator | Thursday 09 April 2026 00:47:00 +0000 (0:00:00.508) 0:01:27.836 ******** 2026-04-09 00:56:13.964735 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:56:13.964741 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.964746 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.964751 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.964756 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.964762 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.964767 | orchestrator | 2026-04-09 00:56:13.964772 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 00:56:13.964777 | orchestrator | Thursday 09 April 2026 00:47:01 +0000 (0:00:00.632) 0:01:28.468 ******** 2026-04-09 00:56:13.964782 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:56:13.964790 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:56:13.964796 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:56:13.964801 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:56:13.964806 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:56:13.964811 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:56:13.964816 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:56:13.964822 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:56:13.964827 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:56:13.964844 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:56:13.964849 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 
00:56:13.964873 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:56:13.964878 | orchestrator | 2026-04-09 00:56:13.964884 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 00:56:13.964889 | orchestrator | Thursday 09 April 2026 00:47:02 +0000 (0:00:01.092) 0:01:29.560 ******** 2026-04-09 00:56:13.964894 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.964899 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.964904 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.964909 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.964914 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.964920 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.964925 | orchestrator | 2026-04-09 00:56:13.964930 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 00:56:13.964936 | orchestrator | Thursday 09 April 2026 00:47:03 +0000 (0:00:00.948) 0:01:30.509 ******** 2026-04-09 00:56:13.964941 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.964946 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.964951 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.964956 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.964962 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.964967 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.964972 | orchestrator | 2026-04-09 00:56:13.964977 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 00:56:13.964982 | orchestrator | Thursday 09 April 2026 00:47:03 +0000 (0:00:00.550) 0:01:31.059 ******** 2026-04-09 00:56:13.964988 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.964993 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.964998 | 
orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965004 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965009 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965014 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965025 | orchestrator | 2026-04-09 00:56:13.965031 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 00:56:13.965036 | orchestrator | Thursday 09 April 2026 00:47:04 +0000 (0:00:00.762) 0:01:31.822 ******** 2026-04-09 00:56:13.965041 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965046 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965052 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965057 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965062 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965067 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965073 | orchestrator | 2026-04-09 00:56:13.965078 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 00:56:13.965083 | orchestrator | Thursday 09 April 2026 00:47:05 +0000 (0:00:00.745) 0:01:32.567 ******** 2026-04-09 00:56:13.965088 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.965093 | orchestrator | 2026-04-09 00:56:13.965098 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 00:56:13.965103 | orchestrator | Thursday 09 April 2026 00:47:06 +0000 (0:00:01.157) 0:01:33.725 ******** 2026-04-09 00:56:13.965109 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.965113 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.965119 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.965124 | orchestrator | ok: 
[testbed-node-0] 2026-04-09 00:56:13.965129 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.965135 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.965140 | orchestrator | 2026-04-09 00:56:13.965145 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 00:56:13.965150 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:01:26.187) 0:02:59.912 ******** 2026-04-09 00:56:13.965155 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:56:13.965160 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:56:13.965165 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:56:13.965170 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965175 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:56:13.965180 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:56:13.965185 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:56:13.965190 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965196 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:56:13.965203 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:56:13.965208 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:56:13.965214 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965219 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:56:13.965224 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:56:13.965229 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:56:13.965234 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965239 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:56:13.965244 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:56:13.965250 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:56:13.965255 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965260 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:56:13.965284 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:56:13.965292 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:56:13.965297 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965302 | orchestrator | 2026-04-09 00:56:13.965306 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-09 00:56:13.965311 | orchestrator | Thursday 09 April 2026 00:48:33 +0000 (0:00:00.633) 0:03:00.546 ******** 2026-04-09 00:56:13.965317 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965322 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965327 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965332 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965337 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965342 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965348 | orchestrator | 2026-04-09 00:56:13.965353 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-09 00:56:13.965358 | orchestrator | Thursday 09 April 2026 00:48:33 +0000 (0:00:00.702) 0:03:01.249 ******** 2026-04-09 00:56:13.965363 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965368 | orchestrator | 2026-04-09 00:56:13.965374 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-09 00:56:13.965379 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:00.268) 0:03:01.517 ******** 2026-04-09 00:56:13.965384 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965389 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965394 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965399 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965405 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965410 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965415 | orchestrator | 2026-04-09 00:56:13.965420 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-09 00:56:13.965425 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:00.609) 0:03:02.127 ******** 2026-04-09 00:56:13.965430 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965436 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965441 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965446 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965451 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965456 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965461 | orchestrator | 2026-04-09 00:56:13.965466 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-09 00:56:13.965471 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.992) 0:03:03.119 ******** 2026-04-09 00:56:13.965476 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965481 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965487 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965492 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965497 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965502 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965508 | orchestrator | 2026-04-09 00:56:13.965513 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-09 00:56:13.965518 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.676) 0:03:03.796 ******** 2026-04-09 00:56:13.965523 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.965528 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.965533 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.965538 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.965544 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.965548 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.965554 | orchestrator | 2026-04-09 00:56:13.965559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-09 00:56:13.965564 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:02.038) 0:03:05.835 ******** 2026-04-09 00:56:13.965574 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.965579 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.965584 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.965589 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.965594 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.965599 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.965604 | orchestrator | 2026-04-09 00:56:13.965609 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-09 00:56:13.965615 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.574) 0:03:06.409 ******** 2026-04-09 00:56:13.965620 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.965626 | orchestrator | 2026-04-09 00:56:13.965631 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-09 00:56:13.965636 | orchestrator | Thursday 09 April 2026 00:48:40 +0000 (0:00:01.105) 0:03:07.515 ******** 2026-04-09 00:56:13.965641 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965646 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965651 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965660 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965665 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965670 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965675 | orchestrator | 2026-04-09 00:56:13.965680 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-09 00:56:13.965685 | orchestrator | Thursday 09 April 2026 00:48:40 +0000 (0:00:00.594) 0:03:08.110 ******** 2026-04-09 00:56:13.965690 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965695 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965700 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965706 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965711 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965716 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965721 | orchestrator | 2026-04-09 00:56:13.965727 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-09 00:56:13.965732 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:00.794) 0:03:08.904 ******** 2026-04-09 00:56:13.965737 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965742 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965747 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965752 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965757 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965779 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965786 | orchestrator | 2026-04-09 00:56:13.965791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-09 00:56:13.965796 | orchestrator | Thursday 09 April 2026 00:48:42 +0000 (0:00:00.801) 0:03:09.706 ******** 2026-04-09 00:56:13.965801 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965807 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965811 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965817 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965822 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965827 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965860 | orchestrator | 2026-04-09 00:56:13.965866 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-09 00:56:13.965871 | orchestrator | Thursday 09 April 2026 00:48:43 +0000 (0:00:00.849) 0:03:10.556 ******** 2026-04-09 00:56:13.965877 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965882 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965887 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965892 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965897 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965902 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965912 | orchestrator | 2026-04-09 00:56:13.965917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-09 00:56:13.965922 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:00.709) 0:03:11.266 ******** 2026-04-09 00:56:13.965928 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965933 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965938 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965943 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965948 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.965953 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.965958 | orchestrator | 2026-04-09 00:56:13.965964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-09 00:56:13.965969 | orchestrator | Thursday 09 April 2026 00:48:44 +0000 (0:00:00.724) 0:03:11.991 ******** 2026-04-09 00:56:13.965974 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.965980 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.965985 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.965990 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.965995 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.966001 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.966006 | orchestrator | 2026-04-09 00:56:13.966011 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-09 00:56:13.966039 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:00.443) 0:03:12.435 ******** 2026-04-09 00:56:13.966044 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.966049 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.966054 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.966059 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.966065 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.966070 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.966075 | orchestrator | 2026-04-09 00:56:13.966080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-09 00:56:13.966086 | orchestrator | Thursday 09 April 2026 00:48:45 
+0000 (0:00:00.780) 0:03:13.215 ********
2026-04-09 00:56:13.966091 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.966096 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.966102 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.966107 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.966112 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.966117 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.966122 | orchestrator |
2026-04-09 00:56:13.966127 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 00:56:13.966132 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:00.983) 0:03:14.198 ********
2026-04-09 00:56:13.966137 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.966143 | orchestrator |
2026-04-09 00:56:13.966148 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 00:56:13.966153 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.924) 0:03:15.123 ********
2026-04-09 00:56:13.966159 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-09 00:56:13.966164 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-09 00:56:13.966169 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-09 00:56:13.966174 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-09 00:56:13.966180 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-09 00:56:13.966185 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-09 00:56:13.966193 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-09 00:56:13.966198 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-09 00:56:13.966203 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-09 00:56:13.966208 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-09 00:56:13.966217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-09 00:56:13.966223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-09 00:56:13.966228 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-09 00:56:13.966233 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-09 00:56:13.966238 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-09 00:56:13.966243 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-09 00:56:13.966248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-09 00:56:13.966254 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-09 00:56:13.966259 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-09 00:56:13.966264 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-09 00:56:13.966288 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-09 00:56:13.966294 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-09 00:56:13.966299 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-09 00:56:13.966305 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-09 00:56:13.966310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-09 00:56:13.966315 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-09 00:56:13.966320 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-09 00:56:13.966325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-09 00:56:13.966331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-09 00:56:13.966336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-09 00:56:13.966341 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-09 00:56:13.966346 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-09 00:56:13.966351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-09 00:56:13.966356 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-09 00:56:13.966362 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-09 00:56:13.966367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-09 00:56:13.966372 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-09 00:56:13.966377 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-09 00:56:13.966382 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-09 00:56:13.966388 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-09 00:56:13.966393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:56:13.966398 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:56:13.966403 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-09 00:56:13.966408 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:56:13.966413 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-09 00:56:13.966418 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:56:13.966423 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:56:13.966428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:56:13.966433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:56:13.966438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:56:13.966444 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:56:13.966448 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:56:13.966453 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:56:13.966465 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:56:13.966470 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:56:13.966475 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:56:13.966481 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:56:13.966486 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:56:13.966491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:56:13.966496 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:56:13.966502 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:56:13.966507 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:56:13.966512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:56:13.966517 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:56:13.966522 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:56:13.966527 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:56:13.966535 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:56:13.966541 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:56:13.966546 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:56:13.966551 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:56:13.966556 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:56:13.966562 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:56:13.966567 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:56:13.966572 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:56:13.966577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:56:13.966582 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:56:13.966587 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:56:13.966592 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:56:13.966614 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:56:13.966620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:56:13.966625 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:56:13.966630 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:56:13.966636 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-09 00:56:13.966641 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-09 00:56:13.966646 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:56:13.966651 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:56:13.966656 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-09 00:56:13.966662 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-09 00:56:13.966667 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-09 00:56:13.966672 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-09 00:56:13.966677 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-09 00:56:13.966682 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-09 00:56:13.966687 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-09 00:56:13.966696 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-09 00:56:13.966701 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-09 00:56:13.966706 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-09 00:56:13.966711 | orchestrator |
2026-04-09 00:56:13.966717 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 00:56:13.966722 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:06.691) 0:03:21.814 ********
2026-04-09 00:56:13.966727 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.966732 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.966737 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.966743 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.966748 | orchestrator |
2026-04-09 00:56:13.966754 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 00:56:13.966759 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:00.959) 0:03:22.774 ********
2026-04-09 00:56:13.966764 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.966769 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.966775 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.966780 | orchestrator |
2026-04-09 00:56:13.966785 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-09 00:56:13.966790 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.734) 0:03:23.509 ********
2026-04-09 00:56:13.966795 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.966800 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.966805 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.966810 | orchestrator |
2026-04-09 00:56:13.966815 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 00:56:13.966820 | orchestrator | Thursday 09 April 2026 00:48:57 +0000 (0:00:01.613) 0:03:25.123 ********
2026-04-09 00:56:13.966825 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.966830 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.966862 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.966867 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.966873 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.966878 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.966883 | orchestrator |
2026-04-09 00:56:13.966892 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 00:56:13.966898 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:00.603) 0:03:25.726 ********
2026-04-09 00:56:13.966903 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.966909 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.966914 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.966919 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.966925 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.966930 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.966936 | orchestrator |
2026-04-09 00:56:13.966941 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 00:56:13.966946 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:00.652) 0:03:26.379 ********
2026-04-09 00:56:13.966952 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.966957 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.966966 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.966971 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.966977 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.966982 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.966988 | orchestrator |
2026-04-09 00:56:13.966993 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 00:56:13.966999 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:00.553) 0:03:26.933 ********
2026-04-09 00:56:13.967022 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967028 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967033 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967038 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967043 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967048 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967053 | orchestrator |
2026-04-09 00:56:13.967058 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 00:56:13.967063 | orchestrator | Thursday 09 April 2026 00:49:00 +0000 (0:00:00.510) 0:03:27.443 ********
2026-04-09 00:56:13.967069 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967074 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967079 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967084 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967090 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967095 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967100 | orchestrator |
2026-04-09 00:56:13.967105 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 00:56:13.967110 | orchestrator | Thursday 09 April 2026 00:49:00 +0000 (0:00:00.704) 0:03:28.148 ********
2026-04-09 00:56:13.967115 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967120 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967126 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967131 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967136 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967141 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967145 | orchestrator |
2026-04-09 00:56:13.967150 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 00:56:13.967155 | orchestrator | Thursday 09 April 2026 00:49:01 +0000 (0:00:00.480) 0:03:28.628 ********
2026-04-09 00:56:13.967161 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967166 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967172 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967177 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967182 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967187 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967192 | orchestrator |
2026-04-09 00:56:13.967197 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 00:56:13.967203 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.689) 0:03:29.318 ********
2026-04-09 00:56:13.967208 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967213 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967218 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967224 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967229 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967234 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967239 | orchestrator |
2026-04-09 00:56:13.967244 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 00:56:13.967250 | orchestrator | Thursday 09 April 2026 00:49:02 +0000 (0:00:00.523) 0:03:29.841 ********
2026-04-09 00:56:13.967255 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967260 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967265 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967276 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.967282 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.967287 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.967292 | orchestrator |
2026-04-09 00:56:13.967297 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 00:56:13.967302 | orchestrator | Thursday 09 April 2026 00:49:04 +0000 (0:00:02.039) 0:03:31.881 ********
2026-04-09 00:56:13.967308 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967312 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967318 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967323 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.967328 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.967333 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.967338 | orchestrator |
2026-04-09 00:56:13.967343 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 00:56:13.967349 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:00.666) 0:03:32.548 ********
2026-04-09 00:56:13.967354 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967359 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967364 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967369 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.967374 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.967379 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.967384 | orchestrator |
2026-04-09 00:56:13.967389 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 00:56:13.967395 | orchestrator | Thursday 09 April 2026 00:49:06 +0000 (0:00:00.847) 0:03:33.396 ********
2026-04-09 00:56:13.967400 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967405 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967410 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967415 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967420 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967425 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967430 | orchestrator |
2026-04-09 00:56:13.967435 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 00:56:13.967440 | orchestrator | Thursday 09 April 2026 00:49:06 +0000 (0:00:00.575) 0:03:33.971 ********
2026-04-09 00:56:13.967446 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967450 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967455 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.967460 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967465 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.967470 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 00:56:13.967476 | orchestrator |
2026-04-09 00:56:13.967498 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 00:56:13.967504 | orchestrator | Thursday 09 April 2026 00:49:07 +0000 (0:00:00.823) 0:03:34.794 ********
2026-04-09 00:56:13.967509 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967514 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967520 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967526 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-09 00:56:13.967533 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-09 00:56:13.967543 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-09 00:56:13.967550 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-09 00:56:13.967555 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967561 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967566 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-09 00:56:13.967571 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-09 00:56:13.967577 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967582 | orchestrator |
2026-04-09 00:56:13.967587 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-09 00:56:13.967592 | orchestrator | Thursday 09 April 2026 00:49:08 +0000 (0:00:00.810) 0:03:35.605 ********
2026-04-09 00:56:13.967597 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967602 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967607 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967613 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967618 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967623 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967628 | orchestrator |
2026-04-09 00:56:13.967633 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-09 00:56:13.967639 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:00.723) 0:03:36.328 ********
2026-04-09 00:56:13.967644 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967649 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967654 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967659 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967665 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967670 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967675 | orchestrator |
2026-04-09 00:56:13.967702 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 00:56:13.967718 | orchestrator | Thursday 09 April 2026 00:49:09 +0000 (0:00:00.577) 0:03:36.906 ********
2026-04-09 00:56:13.967724 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967729 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967734 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967740 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967745 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967750 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967755 | orchestrator |
2026-04-09 00:56:13.967760 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 00:56:13.967765 | orchestrator | Thursday 09 April 2026 00:49:10 +0000 (0:00:01.127) 0:03:38.034 ********
2026-04-09 00:56:13.967771 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967776 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967781 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967789 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967794 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967800 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967805 | orchestrator |
2026-04-09 00:56:13.967811 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 00:56:13.967816 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:00.525) 0:03:38.559 ********
2026-04-09 00:56:13.967821 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967854 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967861 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967866 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.967871 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.967877 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.967882 | orchestrator |
2026-04-09 00:56:13.967887 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 00:56:13.967891 | orchestrator | Thursday 09 April 2026 00:49:11 +0000 (0:00:00.666) 0:03:39.226 ********
2026-04-09 00:56:13.967896 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967901 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.967907 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.967912 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.967917 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.967922 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.967927 | orchestrator |
2026-04-09 00:56:13.967933 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 00:56:13.967938 | orchestrator | Thursday 09 April 2026 00:49:12 +0000 (0:00:00.596) 0:03:39.822 ********
2026-04-09 00:56:13.967943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 00:56:13.967948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 00:56:13.967953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 00:56:13.967959 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.967964 | orchestrator |
2026-04-09 00:56:13.967969 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 00:56:13.967973 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:00.500) 0:03:40.322 ********
2026-04-09 00:56:13.967979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 00:56:13.967985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 00:56:13.967989 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 00:56:13.967995 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.968000 | orchestrator |
2026-04-09 00:56:13.968005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 00:56:13.968010 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:00.522) 0:03:40.845 ********
2026-04-09 00:56:13.968015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 00:56:13.968020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 00:56:13.968025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 00:56:13.968030 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.968035 | orchestrator |
2026-04-09 00:56:13.968040 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 00:56:13.968045 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.712) 0:03:41.557 ********
2026-04-09 00:56:13.968050 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.968055 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.968060 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.968065 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.968070 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.968076 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.968081 | orchestrator |
2026-04-09 00:56:13.968086 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 00:56:13.968095 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.529) 0:03:42.087 ********
2026-04-09 00:56:13.968100 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-09 00:56:13.968105 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.968110 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-09 00:56:13.968115 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.968121 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-09 00:56:13.968126 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.968131 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 00:56:13.968136 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 00:56:13.968141 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 00:56:13.968146 | orchestrator |
2026-04-09 00:56:13.968151 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-09 00:56:13.968157 | orchestrator | Thursday 09 April 2026 00:49:16 +0000 (0:00:01.681) 0:03:43.768 ********
2026-04-09 00:56:13.968162 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.968167 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.968172 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.968177 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:56:13.968182 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:56:13.968187 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:56:13.968192 | orchestrator |
2026-04-09 00:56:13.968197 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:56:13.968203 | orchestrator | Thursday 09 April 2026 00:49:18 +0000 (0:00:02.418) 0:03:46.187 ********
2026-04-09 00:56:13.968211 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.968216 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.968222 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.968227 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:56:13.968232 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:56:13.968237 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:56:13.968243 | orchestrator |
2026-04-09 00:56:13.968248 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-09 00:56:13.968253 | orchestrator | Thursday 09 April 2026 00:49:19 +0000 (0:00:00.907) 0:03:47.095 ********
2026-04-09 00:56:13.968258 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968263 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.968269 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.968274 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:56:13.968279 | orchestrator |
2026-04-09 00:56:13.968284 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-09 00:56:13.968289 | orchestrator | Thursday 09 April 2026 00:49:20 +0000 (0:00:00.905) 0:03:48.000 ********
2026-04-09 00:56:13.968295 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.968300 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.968305 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.968326 | orchestrator |
2026-04-09 00:56:13.968332 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-09 00:56:13.968337 | orchestrator | Thursday 09 April 2026 00:49:21 +0000 (0:00:00.272) 0:03:48.273 ********
2026-04-09 00:56:13.968342 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.968347 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.968352 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.968357 | orchestrator |
2026-04-09 00:56:13.968362 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-09 00:56:13.968368 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:01.101) 0:03:49.375 ********
2026-04-09 00:56:13.968372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:56:13.968377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:56:13.968382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:56:13.968387 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.968396 | orchestrator |
2026-04-09 00:56:13.968402 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-09 00:56:13.968407 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:00.841) 0:03:50.217 ********
2026-04-09 00:56:13.968412 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.968417 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.968423 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.968428 | orchestrator |
2026-04-09 00:56:13.968433 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-09 00:56:13.968438 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:00.507) 0:03:50.725 ********
2026-04-09 00:56:13.968443 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.968448 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.968453 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:13.968458 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-04-09 00:56:13.968463 | orchestrator |
2026-04-09 00:56:13.968468 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-09 00:56:13.968473 | orchestrator | Thursday 09 April 2026 00:49:24 +0000 (0:00:00.898) 0:03:51.623 ********
2026-04-09 00:56:13.968478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:56:13.968484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:56:13.968489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:56:13.968494 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968500 | orchestrator |
2026-04-09 00:56:13.968505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-09 00:56:13.968510 | orchestrator | Thursday 09 April 2026 00:49:24 +0000 (0:00:00.601) 0:03:52.225 ********
2026-04-09 00:56:13.968515 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968520 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.968525 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.968530 | orchestrator |
2026-04-09 00:56:13.968535 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-09 00:56:13.968540 | orchestrator | Thursday 09 April 2026 00:49:25 +0000 (0:00:00.571) 0:03:52.796 ********
2026-04-09 00:56:13.968545 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968550 | orchestrator |
2026-04-09 00:56:13.968555 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-09 00:56:13.968561 | orchestrator | Thursday 09 April 2026 00:49:25 +0000 (0:00:00.213) 0:03:53.009 ********
2026-04-09 00:56:13.968566 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968571 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.968576 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.968581 | orchestrator |
2026-04-09 00:56:13.968586 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-09 00:56:13.968592 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.326) 0:03:53.336 ********
2026-04-09 00:56:13.968597 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968602 | orchestrator |
2026-04-09 00:56:13.968607 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-09 00:56:13.968612 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.229) 0:03:53.566 ********
2026-04-09 00:56:13.968617 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968622 | orchestrator |
2026-04-09 00:56:13.968628 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-09 00:56:13.968632 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.242) 0:03:53.809 ********
2026-04-09 00:56:13.968638 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968643 | orchestrator |
2026-04-09 00:56:13.968648 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-09 00:56:13.968656 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.171) 0:03:53.980 ********
2026-04-09 00:56:13.968665 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.968670 | orchestrator | 
2026-04-09 00:56:13.968675 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-09 00:56:13.968680 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:00.207) 0:03:54.188 ******** 2026-04-09 00:56:13.968685 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.968690 | orchestrator | 2026-04-09 00:56:13.968696 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-09 00:56:13.968701 | orchestrator | Thursday 09 April 2026 00:49:27 +0000 (0:00:00.228) 0:03:54.416 ******** 2026-04-09 00:56:13.968706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:13.968711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.968716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.968721 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.968726 | orchestrator | 2026-04-09 00:56:13.968731 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-09 00:56:13.968736 | orchestrator | Thursday 09 April 2026 00:49:27 +0000 (0:00:00.630) 0:03:55.047 ******** 2026-04-09 00:56:13.968742 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.968763 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.968769 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.968775 | orchestrator | 2026-04-09 00:56:13.968780 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-09 00:56:13.968785 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.623) 0:03:55.671 ******** 2026-04-09 00:56:13.968790 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.968795 | orchestrator | 2026-04-09 00:56:13.968801 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-09 
00:56:13.968805 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.206) 0:03:55.878 ******** 2026-04-09 00:56:13.968810 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.968815 | orchestrator | 2026-04-09 00:56:13.968821 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-09 00:56:13.968826 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.226) 0:03:56.105 ******** 2026-04-09 00:56:13.968855 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.968861 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.968866 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.968871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.968876 | orchestrator | 2026-04-09 00:56:13.968881 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-09 00:56:13.968886 | orchestrator | Thursday 09 April 2026 00:49:29 +0000 (0:00:00.971) 0:03:57.077 ******** 2026-04-09 00:56:13.968890 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.968894 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.968899 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.968904 | orchestrator | 2026-04-09 00:56:13.968909 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-09 00:56:13.968914 | orchestrator | Thursday 09 April 2026 00:49:30 +0000 (0:00:00.344) 0:03:57.421 ******** 2026-04-09 00:56:13.968919 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.968925 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.968930 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.968935 | orchestrator | 2026-04-09 00:56:13.968940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-09 
00:56:13.968946 | orchestrator | Thursday 09 April 2026 00:49:31 +0000 (0:00:01.654) 0:03:59.076 ******** 2026-04-09 00:56:13.968951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.968956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:13.968961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.968966 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.968976 | orchestrator | 2026-04-09 00:56:13.968981 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-09 00:56:13.968986 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:00.833) 0:03:59.909 ******** 2026-04-09 00:56:13.968992 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.968997 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.969002 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.969007 | orchestrator | 2026-04-09 00:56:13.969012 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-09 00:56:13.969017 | orchestrator | Thursday 09 April 2026 00:49:33 +0000 (0:00:00.385) 0:04:00.294 ******** 2026-04-09 00:56:13.969022 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969027 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969032 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969038 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.969043 | orchestrator | 2026-04-09 00:56:13.969048 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-09 00:56:13.969053 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:01.064) 0:04:01.359 ******** 2026-04-09 00:56:13.969058 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.969063 | orchestrator | 
ok: [testbed-node-4] 2026-04-09 00:56:13.969068 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.969073 | orchestrator | 2026-04-09 00:56:13.969079 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-09 00:56:13.969084 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:00.627) 0:04:01.986 ******** 2026-04-09 00:56:13.969089 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.969094 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.969099 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.969105 | orchestrator | 2026-04-09 00:56:13.969110 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-09 00:56:13.969115 | orchestrator | Thursday 09 April 2026 00:49:37 +0000 (0:00:02.287) 0:04:04.274 ******** 2026-04-09 00:56:13.969120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.969129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:13.969134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.969139 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.969144 | orchestrator | 2026-04-09 00:56:13.969150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-09 00:56:13.969155 | orchestrator | Thursday 09 April 2026 00:49:37 +0000 (0:00:00.849) 0:04:05.123 ******** 2026-04-09 00:56:13.969160 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.969165 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.969170 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.969175 | orchestrator | 2026-04-09 00:56:13.969180 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-09 00:56:13.969185 | orchestrator | Thursday 09 April 2026 00:49:38 +0000 (0:00:00.316) 0:04:05.440 ******** 
2026-04-09 00:56:13.969191 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969196 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969201 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969206 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.969211 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.969216 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.969221 | orchestrator | 2026-04-09 00:56:13.969246 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-09 00:56:13.969253 | orchestrator | Thursday 09 April 2026 00:49:38 +0000 (0:00:00.527) 0:04:05.968 ******** 2026-04-09 00:56:13.969258 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.969263 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.969268 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.969277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.969283 | orchestrator | 2026-04-09 00:56:13.969288 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-09 00:56:13.969293 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:00.949) 0:04:06.918 ******** 2026-04-09 00:56:13.969298 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969303 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969308 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969313 | orchestrator | 2026-04-09 00:56:13.969318 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-09 00:56:13.969324 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:00.298) 0:04:07.216 ******** 2026-04-09 00:56:13.969329 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.969334 | orchestrator | changed: [testbed-node-0] 2026-04-09 
00:56:13.969339 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.969344 | orchestrator | 2026-04-09 00:56:13.969349 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-09 00:56:13.969354 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:01.476) 0:04:08.693 ******** 2026-04-09 00:56:13.969360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:56:13.969365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:56:13.969370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:56:13.969376 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969381 | orchestrator | 2026-04-09 00:56:13.969386 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-09 00:56:13.969391 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:00.546) 0:04:09.239 ******** 2026-04-09 00:56:13.969396 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969401 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969406 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969411 | orchestrator | 2026-04-09 00:56:13.969417 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-09 00:56:13.969422 | orchestrator | 2026-04-09 00:56:13.969427 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:56:13.969432 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:00.520) 0:04:09.759 ******** 2026-04-09 00:56:13.969438 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.969443 | orchestrator | 2026-04-09 00:56:13.969448 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 
00:56:13.969453 | orchestrator | Thursday 09 April 2026 00:49:43 +0000 (0:00:00.595) 0:04:10.354 ******** 2026-04-09 00:56:13.969458 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.969463 | orchestrator | 2026-04-09 00:56:13.969468 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:56:13.969474 | orchestrator | Thursday 09 April 2026 00:49:43 +0000 (0:00:00.458) 0:04:10.812 ******** 2026-04-09 00:56:13.969479 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969484 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969489 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969494 | orchestrator | 2026-04-09 00:56:13.969499 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:56:13.969504 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.597) 0:04:11.410 ******** 2026-04-09 00:56:13.969509 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969514 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969520 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969525 | orchestrator | 2026-04-09 00:56:13.969530 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:56:13.969535 | orchestrator | Thursday 09 April 2026 00:49:44 +0000 (0:00:00.449) 0:04:11.860 ******** 2026-04-09 00:56:13.969543 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969549 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969554 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969559 | orchestrator | 2026-04-09 00:56:13.969564 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:56:13.969569 | orchestrator | Thursday 09 April 2026 00:49:44 
+0000 (0:00:00.250) 0:04:12.110 ******** 2026-04-09 00:56:13.969574 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969580 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969589 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969594 | orchestrator | 2026-04-09 00:56:13.969599 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:56:13.969604 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.265) 0:04:12.375 ******** 2026-04-09 00:56:13.969609 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969614 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969619 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969625 | orchestrator | 2026-04-09 00:56:13.969630 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:56:13.969635 | orchestrator | Thursday 09 April 2026 00:49:45 +0000 (0:00:00.723) 0:04:13.099 ******** 2026-04-09 00:56:13.969640 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969646 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969651 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969656 | orchestrator | 2026-04-09 00:56:13.969661 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:56:13.969666 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.483) 0:04:13.582 ******** 2026-04-09 00:56:13.969671 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969676 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969682 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969687 | orchestrator | 2026-04-09 00:56:13.969709 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:56:13.969715 | orchestrator | Thursday 09 April 2026 00:49:46 +0000 (0:00:00.305) 
0:04:13.888 ******** 2026-04-09 00:56:13.969720 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969725 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969730 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969735 | orchestrator | 2026-04-09 00:56:13.969740 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:56:13.969746 | orchestrator | Thursday 09 April 2026 00:49:47 +0000 (0:00:00.708) 0:04:14.596 ******** 2026-04-09 00:56:13.969751 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969756 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969761 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969767 | orchestrator | 2026-04-09 00:56:13.969772 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:56:13.969777 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.724) 0:04:15.321 ******** 2026-04-09 00:56:13.969782 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969787 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969792 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969797 | orchestrator | 2026-04-09 00:56:13.969803 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:56:13.969808 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.272) 0:04:15.594 ******** 2026-04-09 00:56:13.969813 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.969818 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.969823 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.969829 | orchestrator | 2026-04-09 00:56:13.969849 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:56:13.969855 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.464) 0:04:16.059 ******** 2026-04-09 00:56:13.969860 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969872 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969877 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969883 | orchestrator | 2026-04-09 00:56:13.969888 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:56:13.969893 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.268) 0:04:16.328 ******** 2026-04-09 00:56:13.969898 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969903 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969909 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969914 | orchestrator | 2026-04-09 00:56:13.969919 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:56:13.969924 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.344) 0:04:16.672 ******** 2026-04-09 00:56:13.969929 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969935 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969940 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969944 | orchestrator | 2026-04-09 00:56:13.969949 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:56:13.969954 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.467) 0:04:17.140 ******** 2026-04-09 00:56:13.969959 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969964 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.969969 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.969974 | orchestrator | 2026-04-09 00:56:13.969980 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:56:13.969985 | orchestrator | Thursday 09 April 2026 00:49:50 +0000 (0:00:00.522) 0:04:17.662 ******** 2026-04-09 00:56:13.969990 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.969995 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.970000 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.970005 | orchestrator | 2026-04-09 00:56:13.970011 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:56:13.970039 | orchestrator | Thursday 09 April 2026 00:49:50 +0000 (0:00:00.425) 0:04:18.087 ******** 2026-04-09 00:56:13.970045 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970050 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970056 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970062 | orchestrator | 2026-04-09 00:56:13.970067 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:56:13.970073 | orchestrator | Thursday 09 April 2026 00:49:51 +0000 (0:00:00.377) 0:04:18.465 ******** 2026-04-09 00:56:13.970078 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970083 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970089 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970094 | orchestrator | 2026-04-09 00:56:13.970099 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:56:13.970105 | orchestrator | Thursday 09 April 2026 00:49:51 +0000 (0:00:00.483) 0:04:18.948 ******** 2026-04-09 00:56:13.970111 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970116 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970122 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970127 | orchestrator | 2026-04-09 00:56:13.970136 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-09 00:56:13.970141 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:00.731) 0:04:19.679 ******** 2026-04-09 00:56:13.970147 | orchestrator | ok: [testbed-node-0] 2026-04-09 
00:56:13.970152 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970157 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970163 | orchestrator | 2026-04-09 00:56:13.970168 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-09 00:56:13.970174 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:00.351) 0:04:20.031 ******** 2026-04-09 00:56:13.970180 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.970190 | orchestrator | 2026-04-09 00:56:13.970196 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-09 00:56:13.970201 | orchestrator | Thursday 09 April 2026 00:49:53 +0000 (0:00:00.755) 0:04:20.787 ******** 2026-04-09 00:56:13.970206 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.970211 | orchestrator | 2026-04-09 00:56:13.970217 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-09 00:56:13.970243 | orchestrator | Thursday 09 April 2026 00:49:53 +0000 (0:00:00.134) 0:04:20.921 ******** 2026-04-09 00:56:13.970248 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-09 00:56:13.970254 | orchestrator | 2026-04-09 00:56:13.970259 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-09 00:56:13.970264 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:01.340) 0:04:22.262 ******** 2026-04-09 00:56:13.970269 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970274 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970279 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970283 | orchestrator | 2026-04-09 00:56:13.970288 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-09 00:56:13.970294 | orchestrator | Thursday 09 April 
2026 00:49:55 +0000 (0:00:00.448) 0:04:22.710 ******** 2026-04-09 00:56:13.970298 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970303 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970308 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970313 | orchestrator | 2026-04-09 00:56:13.970318 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-09 00:56:13.970323 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:00.450) 0:04:23.161 ******** 2026-04-09 00:56:13.970328 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970333 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970338 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970343 | orchestrator | 2026-04-09 00:56:13.970349 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-09 00:56:13.970354 | orchestrator | Thursday 09 April 2026 00:49:57 +0000 (0:00:01.216) 0:04:24.378 ******** 2026-04-09 00:56:13.970359 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970365 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970370 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970375 | orchestrator | 2026-04-09 00:56:13.970380 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-09 00:56:13.970385 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:01.402) 0:04:25.780 ******** 2026-04-09 00:56:13.970391 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970396 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970401 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970406 | orchestrator | 2026-04-09 00:56:13.970411 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-09 00:56:13.970417 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 
(0:00:00.699) 0:04:26.480 ******** 2026-04-09 00:56:13.970422 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970427 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970432 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970437 | orchestrator | 2026-04-09 00:56:13.970442 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-09 00:56:13.970447 | orchestrator | Thursday 09 April 2026 00:49:59 +0000 (0:00:00.754) 0:04:27.234 ******** 2026-04-09 00:56:13.970452 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970458 | orchestrator | 2026-04-09 00:56:13.970463 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-09 00:56:13.970468 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:01.160) 0:04:28.394 ******** 2026-04-09 00:56:13.970473 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970478 | orchestrator | 2026-04-09 00:56:13.970483 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-09 00:56:13.970488 | orchestrator | Thursday 09 April 2026 00:50:01 +0000 (0:00:00.590) 0:04:28.985 ******** 2026-04-09 00:56:13.970497 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:56:13.970502 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.970508 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.970513 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:13.970518 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-09 00:56:13.970523 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:13.970528 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:13.970533 | 
orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-09 00:56:13.970538 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:13.970543 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-09 00:56:13.970549 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-09 00:56:13.970554 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-09 00:56:13.970559 | orchestrator | 2026-04-09 00:56:13.970564 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-09 00:56:13.970569 | orchestrator | Thursday 09 April 2026 00:50:05 +0000 (0:00:04.120) 0:04:33.105 ******** 2026-04-09 00:56:13.970574 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970582 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970588 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970593 | orchestrator | 2026-04-09 00:56:13.970599 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-09 00:56:13.970604 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:01.257) 0:04:34.363 ******** 2026-04-09 00:56:13.970609 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970614 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970620 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970625 | orchestrator | 2026-04-09 00:56:13.970630 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-09 00:56:13.970635 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:00.285) 0:04:34.648 ******** 2026-04-09 00:56:13.970640 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.970645 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.970650 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.970655 | orchestrator | 2026-04-09 00:56:13.970660 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2026-04-09 00:56:13.970665 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:00.289) 0:04:34.938 ******** 2026-04-09 00:56:13.970670 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970675 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970680 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970685 | orchestrator | 2026-04-09 00:56:13.970709 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-09 00:56:13.970715 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:02.129) 0:04:37.068 ******** 2026-04-09 00:56:13.970720 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970725 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970730 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970735 | orchestrator | 2026-04-09 00:56:13.970740 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-09 00:56:13.970746 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:01.597) 0:04:38.665 ******** 2026-04-09 00:56:13.970751 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.970756 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.970762 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.970767 | orchestrator | 2026-04-09 00:56:13.970772 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-09 00:56:13.970777 | orchestrator | Thursday 09 April 2026 00:50:11 +0000 (0:00:00.315) 0:04:38.981 ******** 2026-04-09 00:56:13.970786 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.970792 | orchestrator | 2026-04-09 00:56:13.970797 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-09 00:56:13.970802 | 
orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:00.608) 0:04:39.589 ******** 2026-04-09 00:56:13.970807 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.970812 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.970818 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.970823 | orchestrator | 2026-04-09 00:56:13.970828 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-09 00:56:13.970842 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:00.447) 0:04:40.037 ******** 2026-04-09 00:56:13.970847 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.970852 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.970857 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.970862 | orchestrator | 2026-04-09 00:56:13.970867 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-09 00:56:13.970873 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.280) 0:04:40.317 ******** 2026-04-09 00:56:13.970878 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.970883 | orchestrator | 2026-04-09 00:56:13.970887 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-09 00:56:13.970892 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.560) 0:04:40.878 ******** 2026-04-09 00:56:13.970897 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970902 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970907 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970912 | orchestrator | 2026-04-09 00:56:13.970917 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-09 00:56:13.970922 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:02.064) 
0:04:42.943 ******** 2026-04-09 00:56:13.970927 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970932 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970935 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970938 | orchestrator | 2026-04-09 00:56:13.970941 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-09 00:56:13.970944 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:01.549) 0:04:44.493 ******** 2026-04-09 00:56:13.970947 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970950 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970955 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970960 | orchestrator | 2026-04-09 00:56:13.970965 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-09 00:56:13.970970 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:01.938) 0:04:46.431 ******** 2026-04-09 00:56:13.970975 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.970980 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.970986 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.970991 | orchestrator | 2026-04-09 00:56:13.970996 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-09 00:56:13.971001 | orchestrator | Thursday 09 April 2026 00:50:22 +0000 (0:00:02.846) 0:04:49.278 ******** 2026-04-09 00:56:13.971006 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.971011 | orchestrator | 2026-04-09 00:56:13.971016 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-09 00:56:13.971021 | orchestrator | Thursday 09 April 2026 00:50:22 +0000 (0:00:00.629) 0:04:49.907 ******** 2026-04-09 00:56:13.971030 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-04-09 00:56:13.971036 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971045 | orchestrator | 2026-04-09 00:56:13.971050 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-09 00:56:13.971055 | orchestrator | Thursday 09 April 2026 00:50:44 +0000 (0:00:21.556) 0:05:11.463 ******** 2026-04-09 00:56:13.971060 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971065 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971070 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971075 | orchestrator | 2026-04-09 00:56:13.971081 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-09 00:56:13.971086 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:06.393) 0:05:17.857 ******** 2026-04-09 00:56:13.971091 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971096 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971101 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971106 | orchestrator | 2026-04-09 00:56:13.971111 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-09 00:56:13.971117 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:00.250) 0:05:18.107 ******** 2026-04-09 00:56:13.971145 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2026-04-09 00:56:13.971152 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-09 00:56:13.971158 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-09 00:56:13.971165 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-09 00:56:13.971170 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-09 00:56:13.971176 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b0d8c73b19b77d230ba284739fa229236f44d311'}])  2026-04-09 00:56:13.971181 | orchestrator | 2026-04-09 00:56:13.971186 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:56:13.971191 | orchestrator | Thursday 09 April 2026 00:51:02 +0000 (0:00:11.490) 0:05:29.598 ******** 2026-04-09 00:56:13.971196 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971201 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971206 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971216 | orchestrator | 2026-04-09 00:56:13.971221 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-09 00:56:13.971226 | orchestrator | Thursday 09 April 2026 00:51:02 +0000 (0:00:00.278) 0:05:29.877 ******** 2026-04-09 00:56:13.971231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.971237 | orchestrator | 2026-04-09 00:56:13.971242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-09 00:56:13.971247 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:00.686) 0:05:30.563 ******** 2026-04-09 00:56:13.971252 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971257 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971263 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971268 | orchestrator | 2026-04-09 00:56:13.971273 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-09 00:56:13.971280 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:00.303) 0:05:30.867 ******** 2026-04-09 00:56:13.971285 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971291 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971296 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971300 | orchestrator | 2026-04-09 00:56:13.971305 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-09 00:56:13.971310 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:00.276) 0:05:31.143 ******** 2026-04-09 00:56:13.971316 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:56:13.971321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:56:13.971326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:56:13.971331 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971337 | orchestrator | 2026-04-09 00:56:13.971342 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-09 00:56:13.971347 | orchestrator | Thursday 09 April 2026 00:51:04 +0000 (0:00:00.707) 0:05:31.850 ******** 2026-04-09 00:56:13.971352 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971357 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971362 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971367 | orchestrator | 2026-04-09 00:56:13.971391 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-09 00:56:13.971398 | orchestrator | 2026-04-09 00:56:13.971403 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:56:13.971408 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:00.686) 0:05:32.537 ******** 2026-04-09 00:56:13.971413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.971418 | orchestrator | 2026-04-09 00:56:13.971423 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-09 00:56:13.971428 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:00.482) 0:05:33.019 ******** 2026-04-09 00:56:13.971433 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.971438 | orchestrator | 2026-04-09 00:56:13.971443 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:56:13.971449 | orchestrator | Thursday 09 April 2026 00:51:06 +0000 (0:00:00.643) 0:05:33.662 ******** 2026-04-09 00:56:13.971454 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971459 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971464 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971469 | orchestrator | 2026-04-09 00:56:13.971474 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:56:13.971479 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:00.714) 0:05:34.377 ******** 2026-04-09 00:56:13.971484 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971489 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971499 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971504 | orchestrator | 2026-04-09 00:56:13.971509 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:56:13.971514 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:00.285) 0:05:34.662 ******** 2026-04-09 00:56:13.971519 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971525 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971530 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971536 | orchestrator | 2026-04-09 00:56:13.971541 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 
00:56:13.971546 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:00.269) 0:05:34.932 ******** 2026-04-09 00:56:13.971551 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971556 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971561 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971566 | orchestrator | 2026-04-09 00:56:13.971571 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:56:13.971577 | orchestrator | Thursday 09 April 2026 00:51:08 +0000 (0:00:00.445) 0:05:35.377 ******** 2026-04-09 00:56:13.971582 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971587 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971592 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971598 | orchestrator | 2026-04-09 00:56:13.971603 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:56:13.971608 | orchestrator | Thursday 09 April 2026 00:51:08 +0000 (0:00:00.656) 0:05:36.034 ******** 2026-04-09 00:56:13.971613 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971618 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971623 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971628 | orchestrator | 2026-04-09 00:56:13.971633 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:56:13.971638 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.250) 0:05:36.285 ******** 2026-04-09 00:56:13.971643 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971648 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971654 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971658 | orchestrator | 2026-04-09 00:56:13.971664 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:56:13.971669 | 
orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.312) 0:05:36.597 ******** 2026-04-09 00:56:13.971674 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971679 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971684 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971689 | orchestrator | 2026-04-09 00:56:13.971695 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:56:13.971700 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.817) 0:05:37.414 ******** 2026-04-09 00:56:13.971705 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971710 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971715 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971720 | orchestrator | 2026-04-09 00:56:13.971726 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:56:13.971731 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.968) 0:05:38.382 ******** 2026-04-09 00:56:13.971737 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971746 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971751 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971756 | orchestrator | 2026-04-09 00:56:13.971762 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:56:13.971767 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.285) 0:05:38.668 ******** 2026-04-09 00:56:13.971772 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.971777 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.971782 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.971787 | orchestrator | 2026-04-09 00:56:13.971796 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:56:13.971801 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 
(0:00:00.301) 0:05:38.970 ******** 2026-04-09 00:56:13.971806 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971812 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971817 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971822 | orchestrator | 2026-04-09 00:56:13.971828 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:56:13.971863 | orchestrator | Thursday 09 April 2026 00:51:11 +0000 (0:00:00.258) 0:05:39.228 ******** 2026-04-09 00:56:13.971869 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971875 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971902 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971909 | orchestrator | 2026-04-09 00:56:13.971915 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:56:13.971919 | orchestrator | Thursday 09 April 2026 00:51:12 +0000 (0:00:00.411) 0:05:39.639 ******** 2026-04-09 00:56:13.971925 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971930 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971935 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971941 | orchestrator | 2026-04-09 00:56:13.971946 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:56:13.971951 | orchestrator | Thursday 09 April 2026 00:51:12 +0000 (0:00:00.255) 0:05:39.895 ******** 2026-04-09 00:56:13.971956 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971961 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971966 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.971971 | orchestrator | 2026-04-09 00:56:13.971976 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:56:13.971981 | orchestrator | Thursday 09 April 2026 00:51:12 +0000 
(0:00:00.250) 0:05:40.146 ******** 2026-04-09 00:56:13.971986 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.971992 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.971997 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.972003 | orchestrator | 2026-04-09 00:56:13.972008 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:56:13.972013 | orchestrator | Thursday 09 April 2026 00:51:13 +0000 (0:00:00.248) 0:05:40.395 ******** 2026-04-09 00:56:13.972018 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.972023 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.972028 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.972033 | orchestrator | 2026-04-09 00:56:13.972038 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:56:13.972044 | orchestrator | Thursday 09 April 2026 00:51:13 +0000 (0:00:00.461) 0:05:40.856 ******** 2026-04-09 00:56:13.972049 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.972054 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.972059 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.972064 | orchestrator | 2026-04-09 00:56:13.972069 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:56:13.972075 | orchestrator | Thursday 09 April 2026 00:51:13 +0000 (0:00:00.277) 0:05:41.134 ******** 2026-04-09 00:56:13.972080 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.972085 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.972090 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.972096 | orchestrator | 2026-04-09 00:56:13.972101 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-09 00:56:13.972106 | orchestrator | Thursday 09 April 2026 00:51:14 +0000 (0:00:00.477) 0:05:41.611 ******** 2026-04-09 
00:56:13.972111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:56:13.972116 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:56:13.972122 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:13.972131 | orchestrator | 2026-04-09 00:56:13.972136 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-09 00:56:13.972141 | orchestrator | Thursday 09 April 2026 00:51:15 +0000 (0:00:00.724) 0:05:42.336 ******** 2026-04-09 00:56:13.972146 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.972151 | orchestrator | 2026-04-09 00:56:13.972156 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-09 00:56:13.972161 | orchestrator | Thursday 09 April 2026 00:51:15 +0000 (0:00:00.668) 0:05:43.005 ******** 2026-04-09 00:56:13.972165 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.972170 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.972175 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.972180 | orchestrator | 2026-04-09 00:56:13.972185 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-09 00:56:13.972191 | orchestrator | Thursday 09 April 2026 00:51:16 +0000 (0:00:00.694) 0:05:43.699 ******** 2026-04-09 00:56:13.972196 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.972201 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.972206 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.972211 | orchestrator | 2026-04-09 00:56:13.972216 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-09 00:56:13.972221 | orchestrator | Thursday 09 April 2026 00:51:16 
+0000 (0:00:00.254) 0:05:43.954 ******** 2026-04-09 00:56:13.972226 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:56:13.972232 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:56:13.972237 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:56:13.972245 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-09 00:56:13.972251 | orchestrator | 2026-04-09 00:56:13.972256 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-09 00:56:13.972261 | orchestrator | Thursday 09 April 2026 00:51:24 +0000 (0:00:07.907) 0:05:51.862 ******** 2026-04-09 00:56:13.972266 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.972271 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.972276 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.972281 | orchestrator | 2026-04-09 00:56:13.972286 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-09 00:56:13.972292 | orchestrator | Thursday 09 April 2026 00:51:25 +0000 (0:00:00.459) 0:05:52.321 ******** 2026-04-09 00:56:13.972297 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 00:56:13.972302 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 00:56:13.972308 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 00:56:13.972313 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 00:56:13.972318 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.972323 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.972328 | orchestrator | 2026-04-09 00:56:13.972350 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:56:13.972357 | orchestrator | Thursday 09 April 2026 00:51:26 +0000 (0:00:01.766) 
0:05:54.088 ******** 2026-04-09 00:56:13.972362 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 00:56:13.972367 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 00:56:13.972372 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 00:56:13.972378 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 00:56:13.972383 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:56:13.972388 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 00:56:13.972393 | orchestrator | 2026-04-09 00:56:13.972399 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-09 00:56:13.972404 | orchestrator | Thursday 09 April 2026 00:51:28 +0000 (0:00:01.307) 0:05:55.395 ******** 2026-04-09 00:56:13.972416 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.972422 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.972427 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.972433 | orchestrator | 2026-04-09 00:56:13.972438 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-09 00:56:13.972443 | orchestrator | Thursday 09 April 2026 00:51:28 +0000 (0:00:00.646) 0:05:56.042 ******** 2026-04-09 00:56:13.972448 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.972453 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.972459 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.972464 | orchestrator | 2026-04-09 00:56:13.972469 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-09 00:56:13.972474 | orchestrator | Thursday 09 April 2026 00:51:29 +0000 (0:00:00.500) 0:05:56.542 ******** 2026-04-09 00:56:13.972479 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.972484 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.972489 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:56:13.972494 | orchestrator | 2026-04-09 00:56:13.972500 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-09 00:56:13.972505 | orchestrator | Thursday 09 April 2026 00:51:29 +0000 (0:00:00.287) 0:05:56.829 ******** 2026-04-09 00:56:13.972510 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.972515 | orchestrator | 2026-04-09 00:56:13.972520 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-09 00:56:13.972525 | orchestrator | Thursday 09 April 2026 00:51:30 +0000 (0:00:00.468) 0:05:57.298 ******** 2026-04-09 00:56:13.972530 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.972535 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.972541 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.972546 | orchestrator | 2026-04-09 00:56:13.972552 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-09 00:56:13.972557 | orchestrator | Thursday 09 April 2026 00:51:30 +0000 (0:00:00.591) 0:05:57.890 ******** 2026-04-09 00:56:13.972562 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.972567 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.972572 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.972577 | orchestrator | 2026-04-09 00:56:13.972582 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-09 00:56:13.972588 | orchestrator | Thursday 09 April 2026 00:51:30 +0000 (0:00:00.323) 0:05:58.214 ******** 2026-04-09 00:56:13.972593 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:13.972598 | orchestrator | 2026-04-09 00:56:13.972603 | orchestrator | TASK [ceph-mgr : Generate 
systemd unit file] ***********************************
2026-04-09 00:56:13.972609 | orchestrator | Thursday 09 April 2026 00:51:31 +0000 (0:00:00.498) 0:05:58.712 ********
2026-04-09 00:56:13.972614 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.972619 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.972624 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.972629 | orchestrator |
2026-04-09 00:56:13.972634 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-09 00:56:13.972640 | orchestrator | Thursday 09 April 2026 00:51:33 +0000 (0:00:01.594) 0:06:00.307 ********
2026-04-09 00:56:13.972645 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.972650 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.972655 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.972660 | orchestrator |
2026-04-09 00:56:13.972665 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-09 00:56:13.972670 | orchestrator | Thursday 09 April 2026 00:51:34 +0000 (0:00:01.149) 0:06:01.456 ********
2026-04-09 00:56:13.972675 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.972680 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.972685 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.972694 | orchestrator |
2026-04-09 00:56:13.972702 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-09 00:56:13.972708 | orchestrator | Thursday 09 April 2026 00:51:36 +0000 (0:00:01.893) 0:06:03.350 ********
2026-04-09 00:56:13.972713 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.972718 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.972723 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.972728 | orchestrator |
2026-04-09 00:56:13.972733 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-09 00:56:13.972739 | orchestrator | Thursday 09 April 2026 00:51:38 +0000 (0:00:02.051) 0:06:05.401 ********
2026-04-09 00:56:13.972744 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.972749 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:13.972755 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-09 00:56:13.972760 | orchestrator |
2026-04-09 00:56:13.972765 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-09 00:56:13.972770 | orchestrator | Thursday 09 April 2026 00:51:38 +0000 (0:00:00.592) 0:06:05.994 ********
2026-04-09 00:56:13.972775 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-09 00:56:13.972797 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-09 00:56:13.972803 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:56:13.972808 | orchestrator |
2026-04-09 00:56:13.972813 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-09 00:56:13.972818 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:13.009) 0:06:19.003 ********
2026-04-09 00:56:13.972823 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:56:13.972828 | orchestrator |
2026-04-09 00:56:13.972845 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-09 00:56:13.972850 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:01.283) 0:06:20.287 ********
2026-04-09 00:56:13.972855 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.972860 | orchestrator |
2026-04-09 00:56:13.972865 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-09 00:56:13.972870 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:00.264) 0:06:20.552 ********
2026-04-09 00:56:13.972876 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.972881 | orchestrator |
2026-04-09 00:56:13.972886 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-09 00:56:13.972891 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:00.119) 0:06:20.671 ********
2026-04-09 00:56:13.972896 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-09 00:56:13.972901 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-09 00:56:13.972906 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-09 00:56:13.972911 | orchestrator |
2026-04-09 00:56:13.972917 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-09 00:56:13.972922 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:06.059) 0:06:26.730 ********
2026-04-09 00:56:13.972927 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-09 00:56:13.972932 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-09 00:56:13.972937 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-09 00:56:13.972942 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-09 00:56:13.972947 | orchestrator |
2026-04-09 00:56:13.972952 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:56:13.972957 | orchestrator | Thursday 09 April 2026 00:52:04 +0000 (0:00:04.695) 0:06:31.426 ********
2026-04-09 00:56:13.972966 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.972972 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.972977 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.972982 | orchestrator |
2026-04-09 00:56:13.972987 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-09 00:56:13.972992 | orchestrator | Thursday 09 April 2026 00:52:04 +0000 (0:00:00.756) 0:06:32.183 ********
2026-04-09 00:56:13.972997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:56:13.973002 | orchestrator |
2026-04-09 00:56:13.973007 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-09 00:56:13.973012 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.512) 0:06:32.695 ********
2026-04-09 00:56:13.973017 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.973022 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.973027 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.973032 | orchestrator |
2026-04-09 00:56:13.973038 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-09 00:56:13.973043 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.291) 0:06:32.986 ********
2026-04-09 00:56:13.973049 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:13.973054 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:13.973059 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:13.973065 | orchestrator |
2026-04-09 00:56:13.973070 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-09 00:56:13.973075 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:01.350) 0:06:34.337 ********
2026-04-09 00:56:13.973080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:56:13.973085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:56:13.973091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:56:13.973096 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:13.973101 | orchestrator |
2026-04-09 00:56:13.973106 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-09 00:56:13.973114 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:00.606) 0:06:34.944 ********
2026-04-09 00:56:13.973119 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:13.973125 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:56:13.973130 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:56:13.973135 | orchestrator |
2026-04-09 00:56:13.973140 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-09 00:56:13.973145 | orchestrator |
2026-04-09 00:56:13.973150 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:56:13.973155 | orchestrator | Thursday 09 April 2026 00:52:08 +0000 (0:00:00.529) 0:06:35.474 ********
2026-04-09 00:56:13.973160 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.973165 | orchestrator |
2026-04-09 00:56:13.973170 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:56:13.973175 | orchestrator | Thursday 09 April 2026 00:52:08 +0000 (0:00:00.603) 0:06:36.078 ********
2026-04-09 00:56:13.973181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.973186 | orchestrator |
2026-04-09 00:56:13.973207 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:56:13.973213 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:00.449) 0:06:36.527 ********
2026-04-09 00:56:13.973219 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973224 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973229 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973235 | orchestrator |
2026-04-09 00:56:13.973240 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:56:13.973245 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:00.387) 0:06:36.914 ********
2026-04-09 00:56:13.973253 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973258 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973263 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973268 | orchestrator |
2026-04-09 00:56:13.973273 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:56:13.973277 | orchestrator | Thursday 09 April 2026 00:52:10 +0000 (0:00:00.688) 0:06:37.603 ********
2026-04-09 00:56:13.973283 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973288 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973292 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973297 | orchestrator |
2026-04-09 00:56:13.973302 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:56:13.973307 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.724) 0:06:38.327 ********
2026-04-09 00:56:13.973312 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973317 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973322 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973328 | orchestrator |
2026-04-09 00:56:13.973333 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:56:13.973338 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.635) 0:06:38.963 ********
2026-04-09 00:56:13.973343 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973349 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973354 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973359 | orchestrator |
2026-04-09 00:56:13.973364 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:56:13.973369 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:00.430) 0:06:39.394 ********
2026-04-09 00:56:13.973374 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973379 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973384 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973389 | orchestrator |
2026-04-09 00:56:13.973394 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:56:13.973399 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:00.260) 0:06:39.654 ********
2026-04-09 00:56:13.973403 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973407 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973412 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973416 | orchestrator |
2026-04-09 00:56:13.973421 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:56:13.973427 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:00.262) 0:06:39.916 ********
2026-04-09 00:56:13.973432 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973437 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973442 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973447 | orchestrator |
2026-04-09 00:56:13.973453 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:56:13.973458 | orchestrator | Thursday 09 April 2026 00:52:13 +0000 (0:00:00.721) 0:06:40.637 ********
2026-04-09 00:56:13.973463 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973468 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973473 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973478 | orchestrator |
2026-04-09 00:56:13.973483 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:56:13.973488 | orchestrator | Thursday 09 April 2026 00:52:14 +0000 (0:00:00.813) 0:06:41.451 ********
2026-04-09 00:56:13.973493 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973498 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973503 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973508 | orchestrator |
2026-04-09 00:56:13.973514 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:56:13.973519 | orchestrator | Thursday 09 April 2026 00:52:14 +0000 (0:00:00.255) 0:06:41.706 ********
2026-04-09 00:56:13.973524 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973533 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973538 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973543 | orchestrator |
2026-04-09 00:56:13.973548 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:56:13.973554 | orchestrator | Thursday 09 April 2026 00:52:14 +0000 (0:00:00.273) 0:06:41.981 ********
2026-04-09 00:56:13.973559 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973564 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973569 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973574 | orchestrator |
2026-04-09 00:56:13.973582 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:56:13.973588 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:00.286) 0:06:42.267 ********
2026-04-09 00:56:13.973594 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973599 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973604 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973609 | orchestrator |
2026-04-09 00:56:13.973614 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:56:13.973619 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:00.299) 0:06:42.566 ********
2026-04-09 00:56:13.973624 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973630 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973635 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973640 | orchestrator |
2026-04-09 00:56:13.973645 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:56:13.973650 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:00.625) 0:06:43.191 ********
2026-04-09 00:56:13.973655 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973660 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973665 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973670 | orchestrator |
2026-04-09 00:56:13.973675 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:56:13.973698 | orchestrator | Thursday 09 April 2026 00:52:16 +0000 (0:00:00.310) 0:06:43.501 ********
2026-04-09 00:56:13.973704 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973709 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973715 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973719 | orchestrator |
2026-04-09 00:56:13.973724 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:56:13.973730 | orchestrator | Thursday 09 April 2026 00:52:16 +0000 (0:00:00.280) 0:06:43.800 ********
2026-04-09 00:56:13.973735 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973740 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973745 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973750 | orchestrator |
2026-04-09 00:56:13.973756 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:56:13.973761 | orchestrator | Thursday 09 April 2026 00:52:16 +0000 (0:00:00.280) 0:06:44.081 ********
2026-04-09 00:56:13.973766 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973771 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973776 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973781 | orchestrator |
2026-04-09 00:56:13.973786 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:56:13.973791 | orchestrator | Thursday 09 April 2026 00:52:17 +0000 (0:00:00.624) 0:06:44.705 ********
2026-04-09 00:56:13.973797 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973802 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973807 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973812 | orchestrator |
2026-04-09 00:56:13.973818 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-09 00:56:13.973823 | orchestrator | Thursday 09 April 2026 00:52:17 +0000 (0:00:00.526) 0:06:45.232 ********
2026-04-09 00:56:13.973828 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973843 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.973848 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973857 | orchestrator |
2026-04-09 00:56:13.973862 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-09 00:56:13.973867 | orchestrator | Thursday 09 April 2026 00:52:18 +0000 (0:00:00.323) 0:06:45.555 ********
2026-04-09 00:56:13.973873 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:56:13.973879 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:56:13.973884 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:56:13.973889 | orchestrator |
2026-04-09 00:56:13.973894 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-09 00:56:13.973900 | orchestrator | Thursday 09 April 2026 00:52:19 +0000 (0:00:01.029) 0:06:46.584 ********
2026-04-09 00:56:13.973905 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.973910 | orchestrator |
2026-04-09 00:56:13.973915 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-09 00:56:13.973920 | orchestrator | Thursday 09 April 2026 00:52:19 +0000 (0:00:00.456) 0:06:47.041 ********
2026-04-09 00:56:13.973925 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973930 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973935 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973941 | orchestrator |
2026-04-09 00:56:13.973946 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-09 00:56:13.973951 | orchestrator | Thursday 09 April 2026 00:52:20 +0000 (0:00:00.249) 0:06:47.290 ********
2026-04-09 00:56:13.973956 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.973961 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.973966 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.973971 | orchestrator |
2026-04-09 00:56:13.973976 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-09 00:56:13.973981 | orchestrator | Thursday 09 April 2026 00:52:20 +0000 (0:00:00.420) 0:06:47.711 ********
2026-04-09 00:56:13.973986 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.973991 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.973995 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.974000 | orchestrator |
2026-04-09 00:56:13.974005 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-09 00:56:13.974011 | orchestrator | Thursday 09 April 2026 00:52:21 +0000 (0:00:00.610) 0:06:48.321 ********
2026-04-09 00:56:13.974042 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.974048 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.974053 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.974058 | orchestrator |
2026-04-09 00:56:13.974063 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-09 00:56:13.974069 | orchestrator | Thursday 09 April 2026 00:52:21 +0000 (0:00:00.346) 0:06:48.667 ********
2026-04-09 00:56:13.974078 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 00:56:13.974085 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 00:56:13.974090 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-09 00:56:13.974095 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 00:56:13.974101 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 00:56:13.974107 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 00:56:13.974112 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 00:56:13.974117 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-09 00:56:13.974123 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 00:56:13.974136 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 00:56:13.974142 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-09 00:56:13.974148 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 00:56:13.974153 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-09 00:56:13.974158 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 00:56:13.974164 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-09 00:56:13.974169 | orchestrator |
2026-04-09 00:56:13.974174 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-09 00:56:13.974180 | orchestrator | Thursday 09 April 2026 00:52:24 +0000 (0:00:03.284) 0:06:51.951 ********
2026-04-09 00:56:13.974185 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.974191 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.974196 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.974201 | orchestrator |
2026-04-09 00:56:13.974206 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-09 00:56:13.974212 | orchestrator | Thursday 09 April 2026 00:52:25 +0000 (0:00:00.471) 0:06:52.423 ********
2026-04-09 00:56:13.974218 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.974223 | orchestrator |
2026-04-09 00:56:13.974228 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-09 00:56:13.974234 | orchestrator | Thursday 09 April 2026 00:52:25 +0000 (0:00:00.452) 0:06:52.876 ********
2026-04-09 00:56:13.974239 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 00:56:13.974245 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 00:56:13.974250 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-09 00:56:13.974256 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-09 00:56:13.974261 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-09 00:56:13.974266 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-09 00:56:13.974271 | orchestrator |
2026-04-09 00:56:13.974276 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-09 00:56:13.974281 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:01.064) 0:06:53.941 ********
2026-04-09 00:56:13.974287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:56:13.974292 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 00:56:13.974297 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 00:56:13.974302 | orchestrator |
2026-04-09 00:56:13.974308 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-09 00:56:13.974313 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:01.952) 0:06:55.893 ********
2026-04-09 00:56:13.974319 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 00:56:13.974324 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-09 00:56:13.974329 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:56:13.974335 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 00:56:13.974340 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 00:56:13.974346 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:56:13.974351 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 00:56:13.974356 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 00:56:13.974362 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:56:13.974366 | orchestrator |
2026-04-09 00:56:13.974371 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-09 00:56:13.974376 | orchestrator | Thursday 09 April 2026 00:52:30 +0000 (0:00:01.390) 0:06:57.284 ********
2026-04-09 00:56:13.974385 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:56:13.974389 | orchestrator |
2026-04-09 00:56:13.974394 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-09 00:56:13.974399 | orchestrator | Thursday 09 April 2026 00:52:31 +0000 (0:00:01.808) 0:06:59.093 ********
2026-04-09 00:56:13.974404 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.974409 | orchestrator |
2026-04-09 00:56:13.974414 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-09 00:56:13.974419 | orchestrator | Thursday 09 April 2026 00:52:32 +0000 (0:00:00.503) 0:06:59.597 ********
2026-04-09 00:56:13.974428 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0ecce907-b02d-5708-a2ce-6926a186870f', 'data_vg': 'ceph-0ecce907-b02d-5708-a2ce-6926a186870f'})
2026-04-09 00:56:13.974434 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fa87c95d-d840-5309-8296-5c77234dd7e9', 'data_vg': 'ceph-fa87c95d-d840-5309-8296-5c77234dd7e9'})
2026-04-09 00:56:13.974439 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e77990f9-27fa-58e8-a0b8-915245e923bd', 'data_vg': 'ceph-e77990f9-27fa-58e8-a0b8-915245e923bd'})
2026-04-09 00:56:13.974445 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b063fe53-4e4e-551f-8a45-331436b07c8b', 'data_vg': 'ceph-b063fe53-4e4e-551f-8a45-331436b07c8b'})
2026-04-09 00:56:13.974450 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6c03351d-b2bb-55a5-9b19-7d0118202256', 'data_vg': 'ceph-6c03351d-b2bb-55a5-9b19-7d0118202256'})
2026-04-09 00:56:13.974463 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e4752f0c-8dc2-56ff-98d4-03c08b41fecd', 'data_vg': 'ceph-e4752f0c-8dc2-56ff-98d4-03c08b41fecd'})
2026-04-09 00:56:13.974469 | orchestrator |
2026-04-09 00:56:13.974474 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-09 00:56:13.974480 | orchestrator | Thursday 09 April 2026 00:53:11 +0000 (0:00:39.430) 0:07:39.027 ********
2026-04-09 00:56:13.974485 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.974491 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.974496 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.974501 | orchestrator |
2026-04-09 00:56:13.974507 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-09 00:56:13.974512 | orchestrator | Thursday 09 April 2026 00:53:12 +0000 (0:00:00.425) 0:07:39.452 ********
2026-04-09 00:56:13.974517 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.974522 | orchestrator |
2026-04-09 00:56:13.974527 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-09 00:56:13.974533 | orchestrator | Thursday 09 April 2026 00:53:12 +0000 (0:00:00.449) 0:07:39.902 ********
2026-04-09 00:56:13.974537 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.974543 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.974548 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.974553 | orchestrator |
2026-04-09 00:56:13.974558 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-09 00:56:13.974564 | orchestrator | Thursday 09 April 2026 00:53:13 +0000 (0:00:00.663) 0:07:40.566 ********
2026-04-09 00:56:13.974569 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:13.974574 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:13.974579 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:13.974585 | orchestrator |
2026-04-09 00:56:13.974591 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-09 00:56:13.974597 | orchestrator | Thursday 09 April 2026 00:53:14 +0000 (0:00:01.633) 0:07:42.199 ********
2026-04-09 00:56:13.974602 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:13.974608 | orchestrator |
2026-04-09 00:56:13.974617 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-09 00:56:13.974622 | orchestrator | Thursday 09 April 2026 00:53:15 +0000 (0:00:00.447) 0:07:42.647 ********
2026-04-09 00:56:13.974628 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:56:13.974633 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:56:13.974638 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:56:13.974644 | orchestrator |
2026-04-09 00:56:13.974649 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-09 00:56:13.974654 | orchestrator | Thursday 09 April 2026 00:53:16 +0000 (0:00:01.159) 0:07:43.807 ********
2026-04-09 00:56:13.974660 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:56:13.974665 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:56:13.974671 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:56:13.974676 | orchestrator |
2026-04-09 00:56:13.974681 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-09 00:56:13.974687 | orchestrator | Thursday 09 April 2026 00:53:17 +0000 (0:00:01.424) 0:07:45.231 ********
2026-04-09 00:56:13.974692 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:56:13.974697 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:56:13.974703 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:56:13.974709 | orchestrator |
2026-04-09 00:56:13.974715 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-09 00:56:13.974720 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:01.785) 0:07:47.016 ********
2026-04-09 00:56:13.974726 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.974731 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.974737 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.974742 | orchestrator |
2026-04-09 00:56:13.974747 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-09 00:56:13.974753 | orchestrator | Thursday 09 April 2026 00:53:20 +0000 (0:00:00.336) 0:07:47.353 ********
2026-04-09 00:56:13.974758 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.974763 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.974768 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.974773 | orchestrator |
2026-04-09 00:56:13.974779 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-09 00:56:13.974784 | orchestrator | Thursday 09 April 2026 00:53:20 +0000 (0:00:00.325) 0:07:47.679 ********
2026-04-09 00:56:13.974790 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-04-09 00:56:13.974795 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 00:56:13.974800 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-04-09 00:56:13.974806 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-09 00:56:13.974811 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-04-09 00:56:13.974820 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-09 00:56:13.974825 | orchestrator |
2026-04-09 00:56:13.974830 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-09 00:56:13.974865 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:01.309) 0:07:48.988 ********
2026-04-09 00:56:13.974871 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-09 00:56:13.974876 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-09 00:56:13.974881 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-09 00:56:13.974886 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-09 00:56:13.974892 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-09 00:56:13.974897 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 00:56:13.974902 | orchestrator |
2026-04-09 00:56:13.974907 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-09 00:56:13.974912 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:02.253) 0:07:51.242 ********
2026-04-09 00:56:13.974917 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-09 00:56:13.974922 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-09 00:56:13.974927 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-04-09 00:56:13.974939 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-09 00:56:13.974948 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-09 00:56:13.974954 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-09 00:56:13.974959 | orchestrator |
2026-04-09 00:56:13.974964 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-09 00:56:13.974970 | orchestrator | Thursday 09 April 2026 00:53:27 +0000 (0:00:03.756) 0:07:54.998 ********
2026-04-09 00:56:13.974975 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.974980 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.974985 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:56:13.974990 | orchestrator |
2026-04-09 00:56:13.974995 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-09 00:56:13.975001 | orchestrator | Thursday 09 April 2026 00:53:29 +0000 (0:00:01.939) 0:07:56.938 ********
2026-04-09 00:56:13.975006 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.975011 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.975017 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-09 00:56:13.975022 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:56:13.975027 | orchestrator |
2026-04-09 00:56:13.975032 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-09 00:56:13.975037 | orchestrator | Thursday 09 April 2026 00:53:42 +0000 (0:00:12.962) 0:08:09.901 ********
2026-04-09 00:56:13.975042 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.975047 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.975052 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.975057 | orchestrator |
2026-04-09 00:56:13.975062 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:56:13.975068 | orchestrator | Thursday 09 April 2026 00:53:43 +0000 (0:00:00.812) 0:08:10.713 ********
2026-04-09 00:56:13.975073 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:56:13.975078 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:56:13.975083 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:56:13.975088 | orchestrator |
2026-04-09 00:56:13.975093 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-09
00:56:13.975099 | orchestrator | Thursday 09 April 2026 00:53:44 +0000 (0:00:00.581) 0:08:11.295 ******** 2026-04-09 00:56:13.975104 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.975109 | orchestrator | 2026-04-09 00:56:13.975114 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-09 00:56:13.975119 | orchestrator | Thursday 09 April 2026 00:53:44 +0000 (0:00:00.481) 0:08:11.776 ******** 2026-04-09 00:56:13.975124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.975130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:13.975135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.975140 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975145 | orchestrator | 2026-04-09 00:56:13.975150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-09 00:56:13.975155 | orchestrator | Thursday 09 April 2026 00:53:44 +0000 (0:00:00.394) 0:08:12.171 ******** 2026-04-09 00:56:13.975160 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975166 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975171 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.975176 | orchestrator | 2026-04-09 00:56:13.975182 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-09 00:56:13.975187 | orchestrator | Thursday 09 April 2026 00:53:45 +0000 (0:00:00.545) 0:08:12.716 ******** 2026-04-09 00:56:13.975192 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975197 | orchestrator | 2026-04-09 00:56:13.975202 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-09 00:56:13.975211 | orchestrator | Thursday 09 April 
2026 00:53:45 +0000 (0:00:00.215) 0:08:12.931 ******** 2026-04-09 00:56:13.975216 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975221 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975227 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.975232 | orchestrator | 2026-04-09 00:56:13.975238 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-09 00:56:13.975243 | orchestrator | Thursday 09 April 2026 00:53:45 +0000 (0:00:00.309) 0:08:13.241 ******** 2026-04-09 00:56:13.975248 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975253 | orchestrator | 2026-04-09 00:56:13.975258 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-09 00:56:13.975263 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:00.270) 0:08:13.511 ******** 2026-04-09 00:56:13.975269 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975274 | orchestrator | 2026-04-09 00:56:13.975282 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-09 00:56:13.975287 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:00.225) 0:08:13.736 ******** 2026-04-09 00:56:13.975292 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975298 | orchestrator | 2026-04-09 00:56:13.975303 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-09 00:56:13.975308 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:00.116) 0:08:13.852 ******** 2026-04-09 00:56:13.975313 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975318 | orchestrator | 2026-04-09 00:56:13.975323 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-09 00:56:13.975328 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:00.206) 0:08:14.059 ******** 2026-04-09 
00:56:13.975333 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975338 | orchestrator | 2026-04-09 00:56:13.975343 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-09 00:56:13.975348 | orchestrator | Thursday 09 April 2026 00:53:47 +0000 (0:00:00.213) 0:08:14.272 ******** 2026-04-09 00:56:13.975353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:13.975358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.975367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.975372 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975377 | orchestrator | 2026-04-09 00:56:13.975383 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-09 00:56:13.975388 | orchestrator | Thursday 09 April 2026 00:53:47 +0000 (0:00:00.902) 0:08:15.174 ******** 2026-04-09 00:56:13.975393 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975398 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975403 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.975409 | orchestrator | 2026-04-09 00:56:13.975414 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-09 00:56:13.975419 | orchestrator | Thursday 09 April 2026 00:53:48 +0000 (0:00:00.647) 0:08:15.822 ******** 2026-04-09 00:56:13.975424 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975429 | orchestrator | 2026-04-09 00:56:13.975434 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-09 00:56:13.975440 | orchestrator | Thursday 09 April 2026 00:53:48 +0000 (0:00:00.223) 0:08:16.046 ******** 2026-04-09 00:56:13.975445 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975450 | orchestrator | 2026-04-09 00:56:13.975455 | 
orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-09 00:56:13.975460 | orchestrator | 2026-04-09 00:56:13.975465 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:56:13.975471 | orchestrator | Thursday 09 April 2026 00:53:49 +0000 (0:00:00.652) 0:08:16.698 ******** 2026-04-09 00:56:13.975477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.975487 | orchestrator | 2026-04-09 00:56:13.975492 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:56:13.975497 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:01.197) 0:08:17.895 ******** 2026-04-09 00:56:13.975502 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.975507 | orchestrator | 2026-04-09 00:56:13.975512 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:56:13.975517 | orchestrator | Thursday 09 April 2026 00:53:51 +0000 (0:00:01.163) 0:08:19.059 ******** 2026-04-09 00:56:13.975522 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975528 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975533 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.975539 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.975544 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.975549 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.975554 | orchestrator | 2026-04-09 00:56:13.975559 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:56:13.975564 | 
orchestrator | Thursday 09 April 2026 00:53:52 +0000 (0:00:00.823) 0:08:19.883 ******** 2026-04-09 00:56:13.975569 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.975574 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.975579 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.975585 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.975590 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.975595 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.975600 | orchestrator | 2026-04-09 00:56:13.975605 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:56:13.975611 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:00.986) 0:08:20.870 ******** 2026-04-09 00:56:13.975616 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.975621 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.975626 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.975631 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.975636 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.975641 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.975646 | orchestrator | 2026-04-09 00:56:13.975651 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:56:13.975656 | orchestrator | Thursday 09 April 2026 00:53:55 +0000 (0:00:01.405) 0:08:22.275 ******** 2026-04-09 00:56:13.975662 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.975667 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.975672 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.975677 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.975682 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.975687 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.975693 | orchestrator | 2026-04-09 00:56:13.975698 | orchestrator | TASK [ceph-handler : Check 
for a mgr container] ******************************** 2026-04-09 00:56:13.975703 | orchestrator | Thursday 09 April 2026 00:53:56 +0000 (0:00:01.075) 0:08:23.351 ******** 2026-04-09 00:56:13.975709 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975717 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.975722 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975727 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.975732 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.975738 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.975743 | orchestrator | 2026-04-09 00:56:13.975748 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:56:13.975753 | orchestrator | Thursday 09 April 2026 00:53:57 +0000 (0:00:01.002) 0:08:24.354 ******** 2026-04-09 00:56:13.975758 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.975767 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.975772 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.975777 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975782 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975788 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.975793 | orchestrator | 2026-04-09 00:56:13.975798 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:56:13.975803 | orchestrator | Thursday 09 April 2026 00:53:57 +0000 (0:00:00.522) 0:08:24.876 ******** 2026-04-09 00:56:13.975808 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.975813 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.975818 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.975827 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975843 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.975849 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 00:56:13.975854 | orchestrator | 2026-04-09 00:56:13.975859 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:56:13.975865 | orchestrator | Thursday 09 April 2026 00:53:58 +0000 (0:00:00.668) 0:08:25.545 ******** 2026-04-09 00:56:13.975870 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.975875 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.975881 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.975887 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.975892 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.975898 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.975903 | orchestrator | 2026-04-09 00:56:13.975909 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:56:13.975914 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:00.860) 0:08:26.405 ******** 2026-04-09 00:56:13.975920 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.975925 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.975930 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.975936 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.975941 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.975947 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.975952 | orchestrator | 2026-04-09 00:56:13.975958 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:56:13.975963 | orchestrator | Thursday 09 April 2026 00:54:00 +0000 (0:00:01.116) 0:08:27.522 ******** 2026-04-09 00:56:13.975969 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.975974 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.975980 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.975985 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.975991 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:56:13.975996 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.976001 | orchestrator | 2026-04-09 00:56:13.976007 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:56:13.976012 | orchestrator | Thursday 09 April 2026 00:54:00 +0000 (0:00:00.484) 0:08:28.006 ******** 2026-04-09 00:56:13.976017 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976023 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.976028 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.976033 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.976039 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.976044 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.976049 | orchestrator | 2026-04-09 00:56:13.976054 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:56:13.976060 | orchestrator | Thursday 09 April 2026 00:54:01 +0000 (0:00:00.567) 0:08:28.574 ******** 2026-04-09 00:56:13.976066 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.976071 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.976076 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.976082 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976087 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976092 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976101 | orchestrator | 2026-04-09 00:56:13.976107 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:56:13.976113 | orchestrator | Thursday 09 April 2026 00:54:02 +0000 (0:00:00.798) 0:08:29.373 ******** 2026-04-09 00:56:13.976128 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.976134 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.976144 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
00:56:13.976149 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976154 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976160 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976165 | orchestrator | 2026-04-09 00:56:13.976170 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:56:13.976175 | orchestrator | Thursday 09 April 2026 00:54:02 +0000 (0:00:00.627) 0:08:30.000 ******** 2026-04-09 00:56:13.976181 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.976186 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.976191 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.976195 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976200 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976206 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976211 | orchestrator | 2026-04-09 00:56:13.976217 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:56:13.976222 | orchestrator | Thursday 09 April 2026 00:54:03 +0000 (0:00:00.880) 0:08:30.881 ******** 2026-04-09 00:56:13.976227 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.976232 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:13.976237 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.976242 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.976247 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.976252 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.976257 | orchestrator | 2026-04-09 00:56:13.976263 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:56:13.976270 | orchestrator | Thursday 09 April 2026 00:54:04 +0000 (0:00:00.643) 0:08:31.525 ******** 2026-04-09 00:56:13.976275 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:13.976280 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 00:56:13.976285 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:13.976289 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.976294 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.976300 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.976305 | orchestrator | 2026-04-09 00:56:13.976310 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:56:13.976315 | orchestrator | Thursday 09 April 2026 00:54:05 +0000 (0:00:00.832) 0:08:32.357 ******** 2026-04-09 00:56:13.976320 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976325 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.976331 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.976336 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.976341 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.976346 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.976351 | orchestrator | 2026-04-09 00:56:13.976357 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:56:13.976362 | orchestrator | Thursday 09 April 2026 00:54:05 +0000 (0:00:00.614) 0:08:32.972 ******** 2026-04-09 00:56:13.976367 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976372 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.976377 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.976386 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976392 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976397 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976402 | orchestrator | 2026-04-09 00:56:13.976407 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:56:13.976413 | orchestrator | Thursday 09 April 2026 00:54:06 +0000 (0:00:01.141) 0:08:34.114 ******** 2026-04-09 00:56:13.976422 | 
orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976427 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.976432 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976437 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976443 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.976448 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976453 | orchestrator | 2026-04-09 00:56:13.976458 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-09 00:56:13.976463 | orchestrator | Thursday 09 April 2026 00:54:08 +0000 (0:00:01.406) 0:08:35.520 ******** 2026-04-09 00:56:13.976468 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.976473 | orchestrator | 2026-04-09 00:56:13.976479 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-09 00:56:13.976484 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:03.128) 0:08:38.649 ******** 2026-04-09 00:56:13.976489 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976494 | orchestrator | 2026-04-09 00:56:13.976499 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-09 00:56:13.976504 | orchestrator | Thursday 09 April 2026 00:54:12 +0000 (0:00:01.559) 0:08:40.208 ******** 2026-04-09 00:56:13.976509 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976514 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.976519 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.976523 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.976528 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.976533 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.976538 | orchestrator | 2026-04-09 00:56:13.976543 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-09 00:56:13.976548 | orchestrator | Thursday 09 April 
2026 00:54:14 +0000 (0:00:01.776) 0:08:41.984 ******** 2026-04-09 00:56:13.976553 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.976558 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.976563 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.976568 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.976573 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.976578 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.976582 | orchestrator | 2026-04-09 00:56:13.976587 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-09 00:56:13.976592 | orchestrator | Thursday 09 April 2026 00:54:15 +0000 (0:00:01.099) 0:08:43.084 ******** 2026-04-09 00:56:13.976597 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-04-09 00:56:13.976603 | orchestrator | 2026-04-09 00:56:13.976608 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-09 00:56:13.976614 | orchestrator | Thursday 09 April 2026 00:54:17 +0000 (0:00:01.178) 0:08:44.262 ******** 2026-04-09 00:56:13.976619 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.976624 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.976629 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.976634 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.976639 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.976644 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.976649 | orchestrator | 2026-04-09 00:56:13.976653 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-09 00:56:13.976659 | orchestrator | Thursday 09 April 2026 00:54:18 +0000 (0:00:01.771) 0:08:46.033 ******** 2026-04-09 00:56:13.976664 | orchestrator | 
changed: [testbed-node-0] 2026-04-09 00:56:13.976669 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.976674 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.976679 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.976684 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.976689 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.976694 | orchestrator | 2026-04-09 00:56:13.976700 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-09 00:56:13.976709 | orchestrator | Thursday 09 April 2026 00:54:22 +0000 (0:00:03.826) 0:08:49.860 ******** 2026-04-09 00:56:13.976715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.976720 | orchestrator | 2026-04-09 00:56:13.976725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-09 00:56:13.976730 | orchestrator | Thursday 09 April 2026 00:54:24 +0000 (0:00:01.773) 0:08:51.634 ******** 2026-04-09 00:56:13.976739 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976744 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.976749 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.976755 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976760 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976765 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976770 | orchestrator | 2026-04-09 00:56:13.976775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-09 00:56:13.976780 | orchestrator | Thursday 09 April 2026 00:54:25 +0000 (0:00:00.883) 0:08:52.517 ******** 2026-04-09 00:56:13.976785 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:13.976790 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.976795 | 
orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.976801 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:13.976806 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.976811 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:13.976816 | orchestrator | 2026-04-09 00:56:13.976821 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-09 00:56:13.976826 | orchestrator | Thursday 09 April 2026 00:54:27 +0000 (0:00:02.538) 0:08:55.056 ******** 2026-04-09 00:56:13.976841 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:13.976847 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:13.976853 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:13.976858 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976867 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976873 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976878 | orchestrator | 2026-04-09 00:56:13.976883 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-09 00:56:13.976888 | orchestrator | 2026-04-09 00:56:13.976894 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:56:13.976899 | orchestrator | Thursday 09 April 2026 00:54:28 +0000 (0:00:01.121) 0:08:56.177 ******** 2026-04-09 00:56:13.976904 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.976909 | orchestrator | 2026-04-09 00:56:13.976913 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:56:13.976918 | orchestrator | Thursday 09 April 2026 00:54:29 +0000 (0:00:00.532) 0:08:56.710 ******** 2026-04-09 00:56:13.976922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-09 00:56:13.976927 | orchestrator | 2026-04-09 00:56:13.976932 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:56:13.976937 | orchestrator | Thursday 09 April 2026 00:54:30 +0000 (0:00:00.801) 0:08:57.511 ******** 2026-04-09 00:56:13.976942 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.976948 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.976953 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.976958 | orchestrator | 2026-04-09 00:56:13.976963 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:56:13.976968 | orchestrator | Thursday 09 April 2026 00:54:30 +0000 (0:00:00.335) 0:08:57.847 ******** 2026-04-09 00:56:13.976973 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.976978 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.976983 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.976993 | orchestrator | 2026-04-09 00:56:13.976998 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:56:13.977003 | orchestrator | Thursday 09 April 2026 00:54:31 +0000 (0:00:00.687) 0:08:58.534 ******** 2026-04-09 00:56:13.977008 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977013 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977018 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977023 | orchestrator | 2026-04-09 00:56:13.977028 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:56:13.977034 | orchestrator | Thursday 09 April 2026 00:54:32 +0000 (0:00:00.747) 0:08:59.281 ******** 2026-04-09 00:56:13.977039 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977044 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977049 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977054 | orchestrator 
| 2026-04-09 00:56:13.977059 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:56:13.977064 | orchestrator | Thursday 09 April 2026 00:54:33 +0000 (0:00:00.980) 0:09:00.262 ******** 2026-04-09 00:56:13.977069 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977075 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977080 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977086 | orchestrator | 2026-04-09 00:56:13.977091 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:56:13.977096 | orchestrator | Thursday 09 April 2026 00:54:33 +0000 (0:00:00.308) 0:09:00.570 ******** 2026-04-09 00:56:13.977101 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977106 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977112 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977117 | orchestrator | 2026-04-09 00:56:13.977122 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:56:13.977127 | orchestrator | Thursday 09 April 2026 00:54:33 +0000 (0:00:00.282) 0:09:00.853 ******** 2026-04-09 00:56:13.977132 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977137 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977142 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977148 | orchestrator | 2026-04-09 00:56:13.977153 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:56:13.977158 | orchestrator | Thursday 09 April 2026 00:54:33 +0000 (0:00:00.307) 0:09:01.161 ******** 2026-04-09 00:56:13.977163 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977168 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977174 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977178 | orchestrator | 2026-04-09 
00:56:13.977183 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:56:13.977189 | orchestrator | Thursday 09 April 2026 00:54:34 +0000 (0:00:00.998) 0:09:02.159 ******** 2026-04-09 00:56:13.977194 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977199 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977205 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977210 | orchestrator | 2026-04-09 00:56:13.977215 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:56:13.977223 | orchestrator | Thursday 09 April 2026 00:54:35 +0000 (0:00:00.748) 0:09:02.908 ******** 2026-04-09 00:56:13.977228 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977233 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977238 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977243 | orchestrator | 2026-04-09 00:56:13.977248 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:56:13.977254 | orchestrator | Thursday 09 April 2026 00:54:35 +0000 (0:00:00.289) 0:09:03.198 ******** 2026-04-09 00:56:13.977259 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977264 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977269 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977274 | orchestrator | 2026-04-09 00:56:13.977279 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:56:13.977289 | orchestrator | Thursday 09 April 2026 00:54:36 +0000 (0:00:00.303) 0:09:03.501 ******** 2026-04-09 00:56:13.977295 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977300 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977305 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977311 | orchestrator | 2026-04-09 00:56:13.977316 | orchestrator | TASK 
[ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:56:13.977321 | orchestrator | Thursday 09 April 2026 00:54:36 +0000 (0:00:00.744) 0:09:04.246 ******** 2026-04-09 00:56:13.977330 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977335 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977340 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977346 | orchestrator | 2026-04-09 00:56:13.977351 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:56:13.977356 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.344) 0:09:04.590 ******** 2026-04-09 00:56:13.977362 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977367 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977372 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977377 | orchestrator | 2026-04-09 00:56:13.977383 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:56:13.977388 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.332) 0:09:04.923 ******** 2026-04-09 00:56:13.977394 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977399 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977404 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977409 | orchestrator | 2026-04-09 00:56:13.977414 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:56:13.977419 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.316) 0:09:05.239 ******** 2026-04-09 00:56:13.977424 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977430 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977435 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977440 | orchestrator | 2026-04-09 00:56:13.977445 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-04-09 00:56:13.977450 | orchestrator | Thursday 09 April 2026 00:54:38 +0000 (0:00:00.634) 0:09:05.874 ******** 2026-04-09 00:56:13.977456 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977461 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977466 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977471 | orchestrator | 2026-04-09 00:56:13.977476 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:56:13.977481 | orchestrator | Thursday 09 April 2026 00:54:38 +0000 (0:00:00.297) 0:09:06.171 ******** 2026-04-09 00:56:13.977487 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977491 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977496 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977501 | orchestrator | 2026-04-09 00:56:13.977506 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:56:13.977512 | orchestrator | Thursday 09 April 2026 00:54:39 +0000 (0:00:00.338) 0:09:06.510 ******** 2026-04-09 00:56:13.977517 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.977522 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.977527 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.977532 | orchestrator | 2026-04-09 00:56:13.977538 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-09 00:56:13.977543 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:00.812) 0:09:07.322 ******** 2026-04-09 00:56:13.977548 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977553 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977558 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-09 00:56:13.977563 | orchestrator | 2026-04-09 00:56:13.977569 | orchestrator | TASK 
[ceph-facts : Get current default crush rule details] ********************* 2026-04-09 00:56:13.977574 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:00.424) 0:09:07.746 ******** 2026-04-09 00:56:13.977583 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:56:13.977589 | orchestrator | 2026-04-09 00:56:13.977594 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-09 00:56:13.977599 | orchestrator | Thursday 09 April 2026 00:54:42 +0000 (0:00:01.761) 0:09:09.508 ******** 2026-04-09 00:56:13.977605 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-09 00:56:13.977611 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977616 | orchestrator | 2026-04-09 00:56:13.977621 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-09 00:56:13.977627 | orchestrator | Thursday 09 April 2026 00:54:42 +0000 (0:00:00.217) 0:09:09.725 ******** 2026-04-09 00:56:13.977633 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:56:13.977647 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:56:13.977653 | orchestrator | 2026-04-09 00:56:13.977658 | orchestrator | TASK [ceph-mds : Create ceph filesystem] 
*************************************** 2026-04-09 00:56:13.977663 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:07.344) 0:09:17.069 ******** 2026-04-09 00:56:13.977668 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:56:13.977673 | orchestrator | 2026-04-09 00:56:13.977678 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-09 00:56:13.977683 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:02.410) 0:09:19.480 ******** 2026-04-09 00:56:13.977688 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.977694 | orchestrator | 2026-04-09 00:56:13.977699 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-09 00:56:13.977708 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:00.773) 0:09:20.254 ******** 2026-04-09 00:56:13.977713 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 00:56:13.977718 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 00:56:13.977723 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-09 00:56:13.977728 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-09 00:56:13.977733 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-09 00:56:13.977738 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-09 00:56:13.977743 | orchestrator | 2026-04-09 00:56:13.977748 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-09 00:56:13.977753 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:00.993) 0:09:21.247 ******** 2026-04-09 00:56:13.977759 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.977764 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:56:13.977769 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:56:13.977774 | orchestrator | 2026-04-09 00:56:13.977779 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:56:13.977785 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:01.695) 0:09:22.943 ******** 2026-04-09 00:56:13.977790 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:56:13.977824 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:56:13.977830 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.977866 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:56:13.977872 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 00:56:13.977877 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.977882 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:56:13.977887 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 00:56:13.977892 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.977897 | orchestrator | 2026-04-09 00:56:13.977902 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-09 00:56:13.977907 | orchestrator | Thursday 09 April 2026 00:54:56 +0000 (0:00:01.180) 0:09:24.124 ******** 2026-04-09 00:56:13.977912 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.977917 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.977922 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.977927 | orchestrator | 2026-04-09 00:56:13.977932 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-09 00:56:13.977937 | orchestrator | Thursday 09 April 2026 00:54:59 +0000 
(0:00:02.472) 0:09:26.596 ******** 2026-04-09 00:56:13.977942 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.977947 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.977952 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.977957 | orchestrator | 2026-04-09 00:56:13.977962 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-09 00:56:13.977967 | orchestrator | Thursday 09 April 2026 00:54:59 +0000 (0:00:00.304) 0:09:26.901 ******** 2026-04-09 00:56:13.977973 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.977978 | orchestrator | 2026-04-09 00:56:13.977983 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-09 00:56:13.977988 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:00.517) 0:09:27.418 ******** 2026-04-09 00:56:13.977993 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.977998 | orchestrator | 2026-04-09 00:56:13.978003 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-09 00:56:13.978008 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:00.783) 0:09:28.201 ******** 2026-04-09 00:56:13.978041 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.978048 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.978053 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.978058 | orchestrator | 2026-04-09 00:56:13.978064 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-09 00:56:13.978069 | orchestrator | Thursday 09 April 2026 00:55:02 +0000 (0:00:01.255) 0:09:29.457 ******** 2026-04-09 00:56:13.978074 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.978080 
| orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.978086 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.978091 | orchestrator | 2026-04-09 00:56:13.978096 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-09 00:56:13.978105 | orchestrator | Thursday 09 April 2026 00:55:03 +0000 (0:00:01.164) 0:09:30.621 ******** 2026-04-09 00:56:13.978111 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.978116 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.978122 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.978127 | orchestrator | 2026-04-09 00:56:13.978132 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-09 00:56:13.978138 | orchestrator | Thursday 09 April 2026 00:55:05 +0000 (0:00:02.155) 0:09:32.777 ******** 2026-04-09 00:56:13.978144 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.978149 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.978154 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.978164 | orchestrator | 2026-04-09 00:56:13.978169 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-09 00:56:13.978174 | orchestrator | Thursday 09 April 2026 00:55:07 +0000 (0:00:01.987) 0:09:34.765 ******** 2026-04-09 00:56:13.978180 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978185 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978190 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978196 | orchestrator | 2026-04-09 00:56:13.978201 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:56:13.978206 | orchestrator | Thursday 09 April 2026 00:55:08 +0000 (0:00:01.282) 0:09:36.047 ******** 2026-04-09 00:56:13.978217 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.978223 | orchestrator | changed: 
[testbed-node-4] 2026-04-09 00:56:13.978228 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.978234 | orchestrator | 2026-04-09 00:56:13.978239 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-09 00:56:13.978244 | orchestrator | Thursday 09 April 2026 00:55:09 +0000 (0:00:00.657) 0:09:36.705 ******** 2026-04-09 00:56:13.978250 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.978255 | orchestrator | 2026-04-09 00:56:13.978261 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-09 00:56:13.978267 | orchestrator | Thursday 09 April 2026 00:55:09 +0000 (0:00:00.449) 0:09:37.155 ******** 2026-04-09 00:56:13.978272 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978278 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978283 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978288 | orchestrator | 2026-04-09 00:56:13.978294 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-09 00:56:13.978299 | orchestrator | Thursday 09 April 2026 00:55:10 +0000 (0:00:00.442) 0:09:37.597 ******** 2026-04-09 00:56:13.978304 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.978310 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.978315 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.978320 | orchestrator | 2026-04-09 00:56:13.978326 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-09 00:56:13.978331 | orchestrator | Thursday 09 April 2026 00:55:11 +0000 (0:00:01.123) 0:09:38.721 ******** 2026-04-09 00:56:13.978337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.978342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 
00:56:13.978348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.978354 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978359 | orchestrator | 2026-04-09 00:56:13.978364 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-09 00:56:13.978369 | orchestrator | Thursday 09 April 2026 00:55:12 +0000 (0:00:00.685) 0:09:39.407 ******** 2026-04-09 00:56:13.978375 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978380 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978386 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978391 | orchestrator | 2026-04-09 00:56:13.978397 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-09 00:56:13.978402 | orchestrator | 2026-04-09 00:56:13.978408 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:56:13.978413 | orchestrator | Thursday 09 April 2026 00:55:12 +0000 (0:00:00.531) 0:09:39.938 ******** 2026-04-09 00:56:13.978419 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.978424 | orchestrator | 2026-04-09 00:56:13.978430 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:56:13.978435 | orchestrator | Thursday 09 April 2026 00:55:13 +0000 (0:00:00.671) 0:09:40.609 ******** 2026-04-09 00:56:13.978440 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.978450 | orchestrator | 2026-04-09 00:56:13.978456 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:56:13.978461 | orchestrator | Thursday 09 April 2026 00:55:13 +0000 (0:00:00.453) 0:09:41.063 ******** 
2026-04-09 00:56:13.978467 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978472 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978477 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978482 | orchestrator | 2026-04-09 00:56:13.978488 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:56:13.978493 | orchestrator | Thursday 09 April 2026 00:55:14 +0000 (0:00:00.441) 0:09:41.504 ******** 2026-04-09 00:56:13.978499 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978504 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978510 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978515 | orchestrator | 2026-04-09 00:56:13.978520 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:56:13.978526 | orchestrator | Thursday 09 April 2026 00:55:14 +0000 (0:00:00.657) 0:09:42.161 ******** 2026-04-09 00:56:13.978531 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978536 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978542 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978547 | orchestrator | 2026-04-09 00:56:13.978552 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:56:13.978558 | orchestrator | Thursday 09 April 2026 00:55:15 +0000 (0:00:00.636) 0:09:42.797 ******** 2026-04-09 00:56:13.978563 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978572 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978578 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978583 | orchestrator | 2026-04-09 00:56:13.978589 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:56:13.978594 | orchestrator | Thursday 09 April 2026 00:55:16 +0000 (0:00:00.651) 0:09:43.449 ******** 2026-04-09 00:56:13.978599 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:56:13.978605 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978610 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978616 | orchestrator | 2026-04-09 00:56:13.978621 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:56:13.978626 | orchestrator | Thursday 09 April 2026 00:55:16 +0000 (0:00:00.460) 0:09:43.910 ******** 2026-04-09 00:56:13.978632 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978637 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978642 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978648 | orchestrator | 2026-04-09 00:56:13.978653 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:56:13.978658 | orchestrator | Thursday 09 April 2026 00:55:16 +0000 (0:00:00.278) 0:09:44.189 ******** 2026-04-09 00:56:13.978664 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978669 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978678 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978684 | orchestrator | 2026-04-09 00:56:13.978689 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:56:13.978694 | orchestrator | Thursday 09 April 2026 00:55:17 +0000 (0:00:00.364) 0:09:44.553 ******** 2026-04-09 00:56:13.978700 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978705 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978711 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978717 | orchestrator | 2026-04-09 00:56:13.978722 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:56:13.978728 | orchestrator | Thursday 09 April 2026 00:55:17 +0000 (0:00:00.629) 0:09:45.183 ******** 2026-04-09 00:56:13.978733 | orchestrator | ok: [testbed-node-3] 2026-04-09 
00:56:13.978738 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978744 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978749 | orchestrator | 2026-04-09 00:56:13.978765 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:56:13.978770 | orchestrator | Thursday 09 April 2026 00:55:18 +0000 (0:00:01.072) 0:09:46.255 ******** 2026-04-09 00:56:13.978775 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978780 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978785 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978790 | orchestrator | 2026-04-09 00:56:13.978795 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:56:13.978800 | orchestrator | Thursday 09 April 2026 00:55:19 +0000 (0:00:00.289) 0:09:46.545 ******** 2026-04-09 00:56:13.978805 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978810 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978815 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978821 | orchestrator | 2026-04-09 00:56:13.978827 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:56:13.978843 | orchestrator | Thursday 09 April 2026 00:55:19 +0000 (0:00:00.317) 0:09:46.862 ******** 2026-04-09 00:56:13.978849 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978854 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978859 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978864 | orchestrator | 2026-04-09 00:56:13.978869 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:56:13.978874 | orchestrator | Thursday 09 April 2026 00:55:19 +0000 (0:00:00.345) 0:09:47.208 ******** 2026-04-09 00:56:13.978879 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978885 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:56:13.978890 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978895 | orchestrator | 2026-04-09 00:56:13.978900 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:56:13.978905 | orchestrator | Thursday 09 April 2026 00:55:20 +0000 (0:00:00.640) 0:09:47.848 ******** 2026-04-09 00:56:13.978910 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.978915 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.978921 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.978926 | orchestrator | 2026-04-09 00:56:13.978931 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:56:13.978937 | orchestrator | Thursday 09 April 2026 00:55:20 +0000 (0:00:00.315) 0:09:48.164 ******** 2026-04-09 00:56:13.978942 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978947 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978952 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978957 | orchestrator | 2026-04-09 00:56:13.978962 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:56:13.978967 | orchestrator | Thursday 09 April 2026 00:55:21 +0000 (0:00:00.313) 0:09:48.478 ******** 2026-04-09 00:56:13.978972 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.978978 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.978983 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.978988 | orchestrator | 2026-04-09 00:56:13.978993 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:56:13.978998 | orchestrator | Thursday 09 April 2026 00:55:21 +0000 (0:00:00.284) 0:09:48.762 ******** 2026-04-09 00:56:13.979003 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979008 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:56:13.979013 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.979018 | orchestrator | 2026-04-09 00:56:13.979023 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:56:13.979028 | orchestrator | Thursday 09 April 2026 00:55:22 +0000 (0:00:00.542) 0:09:49.305 ******** 2026-04-09 00:56:13.979032 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.979038 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.979043 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.979048 | orchestrator | 2026-04-09 00:56:13.979054 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:56:13.979064 | orchestrator | Thursday 09 April 2026 00:55:22 +0000 (0:00:00.335) 0:09:49.640 ******** 2026-04-09 00:56:13.979069 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.979074 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.979079 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.979084 | orchestrator | 2026-04-09 00:56:13.979092 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-09 00:56:13.979098 | orchestrator | Thursday 09 April 2026 00:55:22 +0000 (0:00:00.565) 0:09:50.206 ******** 2026-04-09 00:56:13.979103 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.979108 | orchestrator | 2026-04-09 00:56:13.979113 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 00:56:13.979119 | orchestrator | Thursday 09 April 2026 00:55:23 +0000 (0:00:00.761) 0:09:50.968 ******** 2026-04-09 00:56:13.979124 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979129 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:56:13.979134 | orchestrator | ok: 
[testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:56:13.979140 | orchestrator | 2026-04-09 00:56:13.979145 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:56:13.979150 | orchestrator | Thursday 09 April 2026 00:55:25 +0000 (0:00:01.660) 0:09:52.628 ******** 2026-04-09 00:56:13.979155 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:56:13.979165 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:56:13.979170 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.979175 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:56:13.979180 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 00:56:13.979185 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.979191 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:56:13.979196 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 00:56:13.979201 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.979206 | orchestrator | 2026-04-09 00:56:13.979211 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-09 00:56:13.979217 | orchestrator | Thursday 09 April 2026 00:55:26 +0000 (0:00:01.080) 0:09:53.709 ******** 2026-04-09 00:56:13.979222 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979227 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.979232 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.979236 | orchestrator | 2026-04-09 00:56:13.979242 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-09 00:56:13.979247 | orchestrator | Thursday 09 April 2026 00:55:26 +0000 (0:00:00.308) 0:09:54.017 ******** 2026-04-09 00:56:13.979252 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-09 00:56:13.979258 | orchestrator | 2026-04-09 00:56:13.979263 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-09 00:56:13.979269 | orchestrator | Thursday 09 April 2026 00:55:27 +0000 (0:00:00.856) 0:09:54.874 ******** 2026-04-09 00:56:13.979274 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:56:13.979280 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:56:13.979286 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:56:13.979291 | orchestrator | 2026-04-09 00:56:13.979297 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-09 00:56:13.979302 | orchestrator | Thursday 09 April 2026 00:55:28 +0000 (0:00:00.833) 0:09:55.707 ******** 2026-04-09 00:56:13.979311 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979317 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:56:13.979322 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979328 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:56:13.979333 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979338 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] 
if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:56:13.979343 | orchestrator | 2026-04-09 00:56:13.979349 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 00:56:13.979354 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:03.306) 0:09:59.013 ******** 2026-04-09 00:56:13.979359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979364 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:56:13.979369 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979374 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:56:13.979379 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:13.979384 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:56:13.979390 | orchestrator | 2026-04-09 00:56:13.979395 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:56:13.979400 | orchestrator | Thursday 09 April 2026 00:55:33 +0000 (0:00:01.970) 0:10:00.984 ******** 2026-04-09 00:56:13.979407 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:56:13.979412 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.979418 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:56:13.979423 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.979428 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:56:13.979433 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.979438 | orchestrator | 2026-04-09 00:56:13.979444 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-09 00:56:13.979449 | orchestrator | Thursday 09 April 2026 
00:55:34 +0000 (0:00:01.041) 0:10:02.025 ******** 2026-04-09 00:56:13.979455 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-09 00:56:13.979460 | orchestrator | 2026-04-09 00:56:13.979465 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-09 00:56:13.979470 | orchestrator | Thursday 09 April 2026 00:55:34 +0000 (0:00:00.225) 0:10:02.251 ******** 2026-04-09 00:56:13.979475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979505 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979510 | orchestrator | 2026-04-09 00:56:13.979515 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-09 00:56:13.979525 | orchestrator | Thursday 09 April 2026 00:55:35 +0000 (0:00:00.820) 0:10:03.071 ******** 2026-04-09 00:56:13.979530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-04-09 00:56:13.979541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:56:13.979556 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979561 | orchestrator | 2026-04-09 00:56:13.979567 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-09 00:56:13.979572 | orchestrator | Thursday 09 April 2026 00:55:36 +0000 (0:00:00.853) 0:10:03.924 ******** 2026-04-09 00:56:13.979577 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:56:13.979583 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:56:13.979588 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:56:13.979593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:56:13.979598 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:56:13.979603 | orchestrator | 2026-04-09 00:56:13.979609 | orchestrator | TASK [ceph-rgw : 
Include_tasks openstack-keystone.yml] ************************* 2026-04-09 00:56:13.979614 | orchestrator | Thursday 09 April 2026 00:55:58 +0000 (0:00:22.063) 0:10:25.988 ******** 2026-04-09 00:56:13.979619 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979624 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.979629 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.979635 | orchestrator | 2026-04-09 00:56:13.979640 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-09 00:56:13.979645 | orchestrator | Thursday 09 April 2026 00:55:59 +0000 (0:00:00.559) 0:10:26.547 ******** 2026-04-09 00:56:13.979651 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979656 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.979661 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.979666 | orchestrator | 2026-04-09 00:56:13.979672 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-09 00:56:13.979677 | orchestrator | Thursday 09 April 2026 00:55:59 +0000 (0:00:00.289) 0:10:26.836 ******** 2026-04-09 00:56:13.979684 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.979690 | orchestrator | 2026-04-09 00:56:13.979695 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-09 00:56:13.979700 | orchestrator | Thursday 09 April 2026 00:56:00 +0000 (0:00:00.522) 0:10:27.359 ******** 2026-04-09 00:56:13.979705 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.979710 | orchestrator | 2026-04-09 00:56:13.979715 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-09 00:56:13.979724 | orchestrator | Thursday 09 April 
2026 00:56:00 +0000 (0:00:00.765) 0:10:28.125 ******** 2026-04-09 00:56:13.979729 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.979734 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.979739 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.979744 | orchestrator | 2026-04-09 00:56:13.979750 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-09 00:56:13.979755 | orchestrator | Thursday 09 April 2026 00:56:02 +0000 (0:00:01.153) 0:10:29.279 ******** 2026-04-09 00:56:13.979760 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.979766 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.979771 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.979775 | orchestrator | 2026-04-09 00:56:13.979782 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-09 00:56:13.979787 | orchestrator | Thursday 09 April 2026 00:56:03 +0000 (0:00:01.113) 0:10:30.393 ******** 2026-04-09 00:56:13.979792 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:56:13.979797 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:56:13.979802 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:56:13.979807 | orchestrator | 2026-04-09 00:56:13.979811 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-09 00:56:13.979816 | orchestrator | Thursday 09 April 2026 00:56:05 +0000 (0:00:02.120) 0:10:32.513 ******** 2026-04-09 00:56:13.979820 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:56:13.979825 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:56:13.979830 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:56:13.979847 | orchestrator | 2026-04-09 00:56:13.979853 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:56:13.979858 | orchestrator | Thursday 09 April 2026 00:56:07 +0000 (0:00:02.325) 0:10:34.839 ******** 2026-04-09 00:56:13.979863 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979869 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.979874 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:13.979879 | orchestrator | 2026-04-09 00:56:13.979884 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-09 00:56:13.979889 | orchestrator | Thursday 09 April 2026 00:56:08 +0000 (0:00:00.607) 0:10:35.446 ******** 2026-04-09 00:56:13.979894 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:13.979899 | orchestrator | 2026-04-09 00:56:13.979905 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-09 00:56:13.979910 | orchestrator | Thursday 09 April 2026 00:56:08 +0000 (0:00:00.503) 0:10:35.949 ******** 2026-04-09 00:56:13.979915 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.979920 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.979925 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.979930 | orchestrator | 2026-04-09 00:56:13.979935 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-09 00:56:13.979941 | orchestrator | Thursday 09 April 2026 00:56:09 +0000 (0:00:00.311) 0:10:36.261 ******** 2026-04-09 00:56:13.979945 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.979950 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:13.979954 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
00:56:13.979959 | orchestrator | 2026-04-09 00:56:13.979964 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-09 00:56:13.979970 | orchestrator | Thursday 09 April 2026 00:56:09 +0000 (0:00:00.563) 0:10:36.824 ******** 2026-04-09 00:56:13.979975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:13.979980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:13.979990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:13.979995 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:13.980000 | orchestrator | 2026-04-09 00:56:13.980005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-09 00:56:13.980010 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.667) 0:10:37.492 ******** 2026-04-09 00:56:13.980016 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:13.980020 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:13.980026 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:13.980031 | orchestrator | 2026-04-09 00:56:13.980036 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:56:13.980041 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2026-04-09 00:56:13.980047 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-09 00:56:13.980052 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-09 00:56:13.980064 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2026-04-09 00:56:13.980069 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-09 00:56:13.980075 
| orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-09 00:56:13.980080 | orchestrator | 2026-04-09 00:56:13.980085 | orchestrator | 2026-04-09 00:56:13.980091 | orchestrator | 2026-04-09 00:56:13.980096 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:56:13.980101 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.238) 0:10:37.731 ******** 2026-04-09 00:56:13.980106 | orchestrator | =============================================================================== 2026-04-09 00:56:13.980111 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 86.19s 2026-04-09 00:56:13.980116 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.43s 2026-04-09 00:56:13.980127 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 22.06s 2026-04-09 00:56:13.980133 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.56s 2026-04-09 00:56:13.980139 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.01s 2026-04-09 00:56:13.980144 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.96s 2026-04-09 00:56:13.980149 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 11.49s 2026-04-09 00:56:13.980154 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.91s 2026-04-09 00:56:13.980159 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.34s 2026-04-09 00:56:13.980164 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.69s 2026-04-09 00:56:13.980169 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.39s 2026-04-09 00:56:13.980174 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.06s 2026-04-09 00:56:13.980179 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.70s 2026-04-09 00:56:13.980185 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.30s 2026-04-09 00:56:13.980190 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.12s 2026-04-09 00:56:13.980195 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.83s 2026-04-09 00:56:13.980205 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.76s 2026-04-09 00:56:13.980210 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.31s 2026-04-09 00:56:13.980216 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.28s 2026-04-09 00:56:13.980221 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 
3.13s 2026-04-09 00:56:13.980226 | orchestrator | 2026-04-09 00:56:13 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:13.980231 | orchestrator | 2026-04-09 00:56:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:17.011671 | orchestrator | 2026-04-09 00:56:17 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:56:17.013757 | orchestrator | 2026-04-09 00:56:17 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:17.015681 | orchestrator | 2026-04-09 00:56:17 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:17.015727 | orchestrator | 2026-04-09 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:20.064972 | orchestrator | 2026-04-09 00:56:20 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:56:20.066619 | orchestrator | 2026-04-09 00:56:20 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:20.068463 | orchestrator | 2026-04-09 00:56:20 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:20.068506 | orchestrator | 2026-04-09 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:23.113058 | orchestrator | 2026-04-09 00:56:23 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state STARTED 2026-04-09 00:56:23.114858 | orchestrator | 2026-04-09 00:56:23 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:23.118078 | orchestrator | 2026-04-09 00:56:23 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:23.118162 | orchestrator | 2026-04-09 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:26.154628 | orchestrator | 2026-04-09 00:56:26 | INFO  | Task d8481a91-c3eb-4281-ba05-08075fd24913 is in state SUCCESS 2026-04-09 00:56:26.155591 | orchestrator | 
2026-04-09 00:56:26.155629 | orchestrator | 2026-04-09 00:56:26.155638 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:56:26.155645 | orchestrator | 2026-04-09 00:56:26.155667 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:56:26.155674 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:00.327) 0:00:00.327 ******** 2026-04-09 00:56:26.155681 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:26.155689 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:26.155695 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:26.155701 | orchestrator | 2026-04-09 00:56:26.155708 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:56:26.155714 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:00.320) 0:00:00.648 ******** 2026-04-09 00:56:26.155721 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-09 00:56:26.155727 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-09 00:56:26.155734 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-09 00:56:26.155741 | orchestrator | 2026-04-09 00:56:26.155747 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-09 00:56:26.155753 | orchestrator | 2026-04-09 00:56:26.155760 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:56:26.155766 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:00.308) 0:00:00.956 ******** 2026-04-09 00:56:26.155796 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:26.155803 | orchestrator | 2026-04-09 00:56:26.155810 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-09 00:56:26.155836 | orchestrator | Thursday 09 April 2026 00:53:51 +0000 (0:00:00.619) 0:00:01.575 ******** 2026-04-09 00:56:26.155842 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:56:26.155848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:56:26.155854 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:56:26.155860 | orchestrator | 2026-04-09 00:56:26.155866 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-09 00:56:26.155873 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:01.957) 0:00:03.533 ******** 2026-04-09 00:56:26.155883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.155893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.155912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.155927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.155942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.155949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.155956 | orchestrator | 2026-04-09 00:56:26.155962 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:56:26.155972 | orchestrator | Thursday 09 April 2026 00:53:55 +0000 (0:00:01.684) 0:00:05.218 ******** 2026-04-09 00:56:26.155979 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:26.155986 | orchestrator | 2026-04-09 00:56:26.155992 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-09 00:56:26.155999 | orchestrator | Thursday 09 April 2026 00:53:55 +0000 (0:00:00.667) 0:00:05.885 ******** 2026-04-09 00:56:26.156014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.156117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.156131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.156144 | orchestrator | 2026-04-09 00:56:26.156151 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-09 00:56:26.156157 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:03.125) 0:00:09.011 ******** 2026-04-09 00:56:26.156164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:56:26.156171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:56:26.156178 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:26.156185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:56:26.156207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:56:26.156214 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:26.156221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:56:26.156228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:56:26.156235 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:26.156241 | orchestrator | 2026-04-09 00:56:26.156247 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-09 00:56:26.156253 | orchestrator | Thursday 09 
April 2026 00:53:59 +0000 (0:00:00.710) 0:00:09.722 ******** 2026-04-09 00:56:26.156260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:56:26.156283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}})  2026-04-09 00:56:26.156291 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:26.156298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:56:26.156305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:56:26.156312 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:26.156319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-09 00:56:26.156341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-09 00:56:26.156348 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:26.156356 | orchestrator | 2026-04-09 00:56:26.156362 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-09 00:56:26.156369 | orchestrator | Thursday 09 April 2026 00:54:00 +0000 (0:00:00.722) 0:00:10.444 ******** 2026-04-09 00:56:26.156375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.156425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.156448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-09 00:56:26.156463 | orchestrator |
2026-04-09 00:56:26.156470 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-09 00:56:26.156476 | orchestrator | Thursday 09 April 2026 00:54:03 +0000 (0:00:02.958) 0:00:13.402 ********
2026-04-09 00:56:26.156483 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:26.156490 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:26.156496 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:26.156502 | orchestrator |
2026-04-09 00:56:26.156508 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-09 00:56:26.156515 | orchestrator | Thursday 09 April 2026 00:54:06 +0000 (0:00:02.806) 0:00:16.208 ********
2026-04-09 00:56:26.156521 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:26.156534 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:26.156540 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:26.156547 | orchestrator |
2026-04-09 00:56:26.156553 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-04-09 00:56:26.156558 | orchestrator | Thursday 09 April 2026 00:54:08 +0000 (0:00:02.254) 0:00:18.462 ********
2026-04-09 00:56:26.156565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-09 00:56:26.156596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-09 00:56:26.156602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-09 00:56:26.156623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-09 00:56:26.156630 | orchestrator |
2026-04-09 00:56:26.156636 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-09 00:56:26.156642 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:02.019) 0:00:20.482 ********
2026-04-09 00:56:26.156648 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:26.156654 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:26.156660 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:26.156666 | orchestrator |
2026-04-09 00:56:26.156672 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-09 00:56:26.156678 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.290) 0:00:20.773 ********
2026-04-09 00:56:26.156684 | orchestrator |
2026-04-09 00:56:26.156690 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-09 00:56:26.156696 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.048) 0:00:20.822 ********
2026-04-09 00:56:26.156702 | orchestrator |
2026-04-09 00:56:26.156708 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-09 00:56:26.156714 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.047) 0:00:20.869 ********
2026-04-09 00:56:26.156720 | orchestrator |
2026-04-09 00:56:26.156725 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-09 00:56:26.156731 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.048) 0:00:20.918 ********
2026-04-09 00:56:26.156738 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:26.156743 | orchestrator |
2026-04-09 00:56:26.156749 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-09 00:56:26.156755 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:00.178) 0:00:21.096 ********
2026-04-09 00:56:26.156760 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:56:26.156766 | orchestrator |
2026-04-09 00:56:26.156771 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-09 00:56:26.156784 | orchestrator | Thursday 09 April 2026 00:54:11 +0000 (0:00:00.174) 0:00:21.270 ********
2026-04-09 00:56:26.156790 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:26.156796 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:26.156802 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:26.156808 | orchestrator |
2026-04-09 00:56:26.156814 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-09 00:56:26.156843 | orchestrator | Thursday 09 April 2026 00:55:10 +0000 (0:00:58.913) 0:01:20.184 ********
2026-04-09 00:56:26.156850 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:26.156856 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:26.156862 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:26.156868 | orchestrator |
2026-04-09 00:56:26.156874 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-09 00:56:26.156880 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:01:00.111) 0:02:20.296 ********
2026-04-09 00:56:26.156887 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:56:26.156893 | orchestrator |
2026-04-09 00:56:26.156900 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-09 00:56:26.156906 | orchestrator | Thursday 09 April 2026 00:56:10 +0000 (0:00:00.673) 0:02:20.969 ********
2026-04-09 00:56:26.156913 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:26.156920 | orchestrator |
2026-04-09 00:56:26.156926 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-04-09 00:56:26.156932 | orchestrator | Thursday 09 April 2026 00:56:13 +0000 (0:00:02.363) 0:02:23.333 ********
2026-04-09 00:56:26.156938 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:26.156944 | orchestrator |
2026-04-09 00:56:26.156951 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-09 00:56:26.156957 | orchestrator | Thursday 09 April 2026 00:56:15 +0000 (0:00:02.111) 0:02:25.445 ********
2026-04-09 00:56:26.156962 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:56:26.156969 | orchestrator |
2026-04-09 00:56:26.156975 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-09 00:56:26.156981 | orchestrator | Thursday 09 April 2026 00:56:17 +0000 (0:00:02.233) 0:02:27.678 ********
2026-04-09 00:56:26.156988 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:26.156994 | orchestrator |
2026-04-09 00:56:26.157000 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-09 00:56:26.157007 | orchestrator | Thursday 09 April 2026 00:56:20 +0000 (0:00:02.617) 0:02:30.296 ********
2026-04-09 00:56:26.157013 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:26.157020 | orchestrator |
2026-04-09 00:56:26.157026 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:56:26.157034 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 00:56:26.157042 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 00:56:26.157054 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 00:56:26.157061 | orchestrator |
2026-04-09 00:56:26.157066 | orchestrator |
2026-04-09 00:56:26.157073 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:56:26.157079 | orchestrator | Thursday 09 April 2026 00:56:22 +0000 (0:00:02.640) 0:02:32.936 ********
2026-04-09 00:56:26.157084 | orchestrator | ===============================================================================
2026-04-09 00:56:26.157090 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 60.11s
2026-04-09 00:56:26.157096 | orchestrator | opensearch : Restart opensearch container ------------------------------ 58.91s
2026-04-09 00:56:26.157102 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.13s
2026-04-09 00:56:26.157112 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.96s
2026-04-09 00:56:26.157118 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.81s
2026-04-09 00:56:26.157125 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.64s
2026-04-09 00:56:26.157131 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.62s
2026-04-09 00:56:26.157137 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.36s
2026-04-09 00:56:26.157143 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.25s
2026-04-09 00:56:26.157149 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.23s
2026-04-09 00:56:26.157156 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.11s
2026-04-09 00:56:26.157163 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.02s
2026-04-09 00:56:26.157170 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.96s
2026-04-09 00:56:26.157176 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.68s
2026-04-09 00:56:26.157182 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.72s
2026-04-09 00:56:26.157188 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.71s
2026-04-09 00:56:26.157194 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s
2026-04-09 00:56:26.157198 | orchestrator | opensearch : include_tasks
---------------------------------------------- 0.67s 2026-04-09 00:56:26.157202 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-04-09 00:56:26.157206 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-04-09 00:56:26.157210 | orchestrator | 2026-04-09 00:56:26 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:26.158126 | orchestrator | 2026-04-09 00:56:26 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:26.158169 | orchestrator | 2026-04-09 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:29.198333 | orchestrator | 2026-04-09 00:56:29 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:29.198580 | orchestrator | 2026-04-09 00:56:29 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:29.198602 | orchestrator | 2026-04-09 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:32.235225 | orchestrator | 2026-04-09 00:56:32 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:32.237926 | orchestrator | 2026-04-09 00:56:32 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:32.237973 | orchestrator | 2026-04-09 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:35.279327 | orchestrator | 2026-04-09 00:56:35 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:35.281874 | orchestrator | 2026-04-09 00:56:35 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:35.281934 | orchestrator | 2026-04-09 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:38.317904 | orchestrator | 2026-04-09 00:56:38 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:38.318088 
| orchestrator | 2026-04-09 00:56:38 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state STARTED 2026-04-09 00:56:38.318502 | orchestrator | 2026-04-09 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:41.349049 | orchestrator | 2026-04-09 00:56:41 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED 2026-04-09 00:56:41.350680 | orchestrator | 2026-04-09 00:56:41 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:41.355067 | orchestrator | 2026-04-09 00:56:41.355160 | orchestrator | 2026-04-09 00:56:41.355169 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-09 00:56:41.355175 | orchestrator | 2026-04-09 00:56:41.355179 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-09 00:56:41.355184 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:00.095) 0:00:00.095 ******** 2026-04-09 00:56:41.355188 | orchestrator | ok: [localhost] => { 2026-04-09 00:56:41.355219 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-09 00:56:41.355224 | orchestrator | } 2026-04-09 00:56:41.355228 | orchestrator | 2026-04-09 00:56:41.355232 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-09 00:56:41.355236 | orchestrator | Thursday 09 April 2026 00:53:50 +0000 (0:00:00.036) 0:00:00.131 ******** 2026-04-09 00:56:41.355240 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-09 00:56:41.355246 | orchestrator | ...ignoring 2026-04-09 00:56:41.355250 | orchestrator | 2026-04-09 00:56:41.355254 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-09 00:56:41.355258 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:02.825) 0:00:02.957 ******** 2026-04-09 00:56:41.355262 | orchestrator | skipping: [localhost] 2026-04-09 00:56:41.355266 | orchestrator | 2026-04-09 00:56:41.355270 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-09 00:56:41.355274 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:00.050) 0:00:03.008 ******** 2026-04-09 00:56:41.355278 | orchestrator | ok: [localhost] 2026-04-09 00:56:41.355282 | orchestrator | 2026-04-09 00:56:41.355286 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:56:41.355290 | orchestrator | 2026-04-09 00:56:41.355293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:56:41.355325 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:00.207) 0:00:03.216 ******** 2026-04-09 00:56:41.355330 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.355334 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.355338 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.355342 | orchestrator | 2026-04-09 00:56:41.355346 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:56:41.355350 | orchestrator | Thursday 09 April 2026 00:53:53 +0000 (0:00:00.292) 0:00:03.508 ******** 2026-04-09 00:56:41.355354 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-09 00:56:41.355358 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
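Editor's note: the PLAY [Set kolla_action_mariadb] above probes TCP 3306 for the banner string "MariaDB" and only switches to an upgrade when the probe succeeds; in this run the probe times out and is ignored, so the fresh-deploy action is kept. A minimal sketch of that pattern (variable names are hypothetical, not the testbed's actual tasks):

```yaml
# Illustrative only -- mirrors the check/ignore/set_fact sequence in the log.
- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: "{{ database_vip }}"      # 192.168.16.9 in this run
    port: 3306
    search_regex: MariaDB           # the server greeting contains "MariaDB"
    timeout: 2
  register: mariadb_check
  ignore_errors: true               # a timeout only means "not deployed yet"

- name: Set kolla_action_mariadb = upgrade if MariaDB is already running
  ansible.builtin.set_fact:
    kolla_action_mariadb: upgrade
  when: mariadb_check is succeeded

- name: Set kolla_action_mariadb = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_mariadb: "{{ kolla_action_ng }}"
  when: mariadb_check is failed
```

With nothing answering on 192.168.16.9:3306 yet, the check fails exactly as logged, the upgrade branch is skipped, and the default deploy action applies.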
2026-04-09 00:56:41.355362 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-09 00:56:41.355366 | orchestrator | 2026-04-09 00:56:41.355370 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-09 00:56:41.355374 | orchestrator | 2026-04-09 00:56:41.355378 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-09 00:56:41.355382 | orchestrator | Thursday 09 April 2026 00:53:54 +0000 (0:00:00.554) 0:00:04.062 ******** 2026-04-09 00:56:41.355386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:56:41.355390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 00:56:41.355394 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 00:56:41.355398 | orchestrator | 2026-04-09 00:56:41.355402 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:56:41.355406 | orchestrator | Thursday 09 April 2026 00:53:54 +0000 (0:00:00.347) 0:00:04.410 ******** 2026-04-09 00:56:41.355410 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:41.355428 | orchestrator | 2026-04-09 00:56:41.355432 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-09 00:56:41.355513 | orchestrator | Thursday 09 April 2026 00:53:55 +0000 (0:00:00.596) 0:00:05.007 ******** 2026-04-09 00:56:41.355536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355559 | orchestrator | 2026-04-09 00:56:41.355568 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-09 00:56:41.355572 | orchestrator | Thursday 09 April 2026 00:53:58 +0000 (0:00:03.487) 0:00:08.494 ******** 2026-04-09 00:56:41.355576 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.355580 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.355584 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.355588 | orchestrator | 2026-04-09 00:56:41.355592 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-09 00:56:41.355599 | orchestrator | Thursday 09 April 2026 00:53:59 +0000 (0:00:00.647) 0:00:09.141 ******** 2026-04-09 00:56:41.355603 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
00:56:41.355607 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.355611 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.355615 | orchestrator | 2026-04-09 00:56:41.355619 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-09 00:56:41.355623 | orchestrator | Thursday 09 April 2026 00:54:00 +0000 (0:00:01.572) 0:00:10.714 ******** 2026-04-09 00:56:41.355627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 
00:56:41.355654 | orchestrator |
2026-04-09 00:56:41.355658 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-09 00:56:41.355662 | orchestrator | Thursday 09 April 2026 00:54:04 +0000 (0:00:03.933) 0:00:14.647 ********
2026-04-09 00:56:41.355666 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:56:41.355670 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:56:41.355674 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:41.355678 | orchestrator |
2026-04-09 00:56:41.355682 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-09 00:56:41.355686 | orchestrator | Thursday 09 April 2026 00:54:05 +0000 (0:00:01.243) 0:00:15.890 ********
2026-04-09 00:56:41.355690 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:56:41.355694 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:56:41.355698 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:56:41.355702 | orchestrator |
2026-04-09 00:56:41.355706 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-09 00:56:41.355710 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:04.556) 0:00:20.447 ********
2026-04-09 00:56:41.355714 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:56:41.355718 | orchestrator |
2026-04-09 00:56:41.355722 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-09 00:56:41.355725 | orchestrator | Thursday 09 April 2026 00:54:10 +0000 (0:00:00.439) 0:00:20.886 ********
2026-04-09 00:56:41.355736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355741 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.355745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355757 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.355765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355772 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.355776 | orchestrator | 2026-04-09 00:56:41.355780 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 00:56:41.355785 | orchestrator | Thursday 09 April 2026 00:54:13 +0000 (0:00:03.036) 0:00:23.923 ******** 2026-04-09 00:56:41.355789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355797 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.355844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355849 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.355856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355865 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.355869 | orchestrator | 2026-04-09 00:56:41.355874 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 00:56:41.355877 | orchestrator | Thursday 09 April 2026 00:54:15 +0000 (0:00:02.006) 0:00:25.930 ******** 2026-04-09 00:56:41.355881 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355886 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.355896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355905 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.355909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:56:41.355914 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.355918 | orchestrator | 2026-04-09 00:56:41.355922 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-09 00:56:41.355926 | orchestrator | Thursday 09 April 2026 00:54:18 +0000 
(0:00:02.618) 0:00:28.548 ******** 2026-04-09 00:56:41 | INFO  | Task a0362dc5-9c7b-4da1-ac0a-ba0ecb999370 is in state SUCCESS 2026-04-09 00:56:41.355933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}}) 2026-04-09 00:56:41.355950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:56:41.355970 | orchestrator | 2026-04-09 00:56:41.355974 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-09 00:56:41.355978 | orchestrator | Thursday 09 April 2026 00:54:21 +0000 (0:00:03.232) 0:00:31.781 
******** 2026-04-09 00:56:41.355982 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.355986 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:41.355990 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:41.355994 | orchestrator | 2026-04-09 00:56:41.355998 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-09 00:56:41.356001 | orchestrator | Thursday 09 April 2026 00:54:22 +0000 (0:00:00.931) 0:00:32.712 ******** 2026-04-09 00:56:41.356005 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356009 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356013 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356017 | orchestrator | 2026-04-09 00:56:41.356021 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-09 00:56:41.356025 | orchestrator | Thursday 09 April 2026 00:54:23 +0000 (0:00:00.332) 0:00:33.044 ******** 2026-04-09 00:56:41.356029 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356033 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356037 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356040 | orchestrator | 2026-04-09 00:56:41.356044 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-09 00:56:41.356048 | orchestrator | Thursday 09 April 2026 00:54:23 +0000 (0:00:00.333) 0:00:33.378 ******** 2026-04-09 00:56:41.356053 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-09 00:56:41.356058 | orchestrator | ...ignoring 2026-04-09 00:56:41.356062 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-09 00:56:41.356066 | orchestrator | ...ignoring 2026-04-09 00:56:41.356070 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-09 00:56:41.356074 | orchestrator | ...ignoring 2026-04-09 00:56:41.356078 | orchestrator | 2026-04-09 00:56:41.356081 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-09 00:56:41.356085 | orchestrator | Thursday 09 April 2026 00:54:34 +0000 (0:00:11.042) 0:00:44.420 ******** 2026-04-09 00:56:41.356089 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356093 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356097 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356101 | orchestrator | 2026-04-09 00:56:41.356105 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-09 00:56:41.356130 | orchestrator | Thursday 09 April 2026 00:54:34 +0000 (0:00:00.408) 0:00:44.829 ******** 2026-04-09 00:56:41.356134 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356138 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356142 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356146 | orchestrator | 2026-04-09 00:56:41.356150 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-09 00:56:41.356158 | orchestrator | Thursday 09 April 2026 00:54:35 +0000 (0:00:00.416) 0:00:45.246 ******** 2026-04-09 00:56:41.356163 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356168 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356172 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356176 | orchestrator | 2026-04-09 00:56:41.356181 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-09 00:56:41.356185 | orchestrator | Thursday 09 April 2026 00:54:35 +0000 (0:00:00.426) 0:00:45.673 ******** 2026-04-09 00:56:41.356190 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356195 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356200 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356204 | orchestrator | 2026-04-09 00:56:41.356209 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-09 00:56:41.356214 | orchestrator | Thursday 09 April 2026 00:54:36 +0000 (0:00:00.620) 0:00:46.293 ******** 2026-04-09 00:56:41.356218 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356223 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356228 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356232 | orchestrator | 2026-04-09 00:56:41.356239 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-09 00:56:41.356244 | orchestrator | Thursday 09 April 2026 00:54:36 +0000 (0:00:00.484) 0:00:46.778 ******** 2026-04-09 00:56:41.356249 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356254 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356259 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356263 | orchestrator | 2026-04-09 00:56:41.356268 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:56:41.356276 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.444) 0:00:47.222 ******** 2026-04-09 00:56:41.356281 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356285 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356290 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-09 00:56:41.356295 | orchestrator | 2026-04-09 
00:56:41.356299 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-09 00:56:41.356304 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.388) 0:00:47.611 ******** 2026-04-09 00:56:41.356308 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356313 | orchestrator | 2026-04-09 00:56:41.356318 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-09 00:56:41.356323 | orchestrator | Thursday 09 April 2026 00:54:47 +0000 (0:00:10.094) 0:00:57.706 ******** 2026-04-09 00:56:41.356327 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356332 | orchestrator | 2026-04-09 00:56:41.356336 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:56:41.356341 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.276) 0:00:57.982 ******** 2026-04-09 00:56:41.356345 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356350 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356355 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356359 | orchestrator | 2026-04-09 00:56:41.356364 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-09 00:56:41.356369 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.819) 0:00:58.803 ******** 2026-04-09 00:56:41.356374 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356378 | orchestrator | 2026-04-09 00:56:41.356383 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-09 00:56:41.356387 | orchestrator | Thursday 09 April 2026 00:54:56 +0000 (0:00:08.057) 0:01:06.860 ******** 2026-04-09 00:56:41.356392 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356396 | orchestrator | 2026-04-09 00:56:41.356401 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-04-09 00:56:41.356406 | orchestrator | Thursday 09 April 2026 00:54:58 +0000 (0:00:01.579) 0:01:08.439 ******** 2026-04-09 00:56:41.356413 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356418 | orchestrator | 2026-04-09 00:56:41.356423 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-09 00:56:41.356428 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:02.518) 0:01:10.958 ******** 2026-04-09 00:56:41.356433 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356437 | orchestrator | 2026-04-09 00:56:41.356442 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-09 00:56:41.356447 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:00.235) 0:01:11.193 ******** 2026-04-09 00:56:41.356451 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356455 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356459 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356463 | orchestrator | 2026-04-09 00:56:41.356467 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-09 00:56:41.356471 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:00.354) 0:01:11.548 ******** 2026-04-09 00:56:41.356475 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356479 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:41.356483 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:41.356487 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-09 00:56:41.356491 | orchestrator | 2026-04-09 00:56:41.356495 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-09 00:56:41.356498 | orchestrator | skipping: no hosts matched 2026-04-09 00:56:41.356502 | orchestrator | 2026-04-09 00:56:41.356506 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-09 00:56:41.356510 | orchestrator | 2026-04-09 00:56:41.356514 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-09 00:56:41.356518 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:00.275) 0:01:11.823 ******** 2026-04-09 00:56:41.356522 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:56:41.356526 | orchestrator | 2026-04-09 00:56:41.356530 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-09 00:56:41.356534 | orchestrator | Thursday 09 April 2026 00:55:16 +0000 (0:00:15.006) 0:01:26.829 ******** 2026-04-09 00:56:41.356538 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356542 | orchestrator | 2026-04-09 00:56:41.356555 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-09 00:56:41.356559 | orchestrator | Thursday 09 April 2026 00:55:31 +0000 (0:00:14.560) 0:01:41.390 ******** 2026-04-09 00:56:41.356568 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356572 | orchestrator | 2026-04-09 00:56:41.356576 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-09 00:56:41.356580 | orchestrator | 2026-04-09 00:56:41.356584 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-09 00:56:41.356588 | orchestrator | Thursday 09 April 2026 00:55:33 +0000 (0:00:02.527) 0:01:43.917 ******** 2026-04-09 00:56:41.356592 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:56:41.356596 | orchestrator | 2026-04-09 00:56:41.356601 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-09 00:56:41.356608 | orchestrator | Thursday 09 April 2026 00:55:49 +0000 (0:00:15.430) 0:01:59.348 ******** 2026-04-09 00:56:41.356615 | 
orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356621 | orchestrator | 2026-04-09 00:56:41.356627 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-09 00:56:41.356633 | orchestrator | Thursday 09 April 2026 00:56:04 +0000 (0:00:15.485) 0:02:14.834 ******** 2026-04-09 00:56:41.356644 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356650 | orchestrator | 2026-04-09 00:56:41.356657 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-09 00:56:41.356662 | orchestrator | 2026-04-09 00:56:41.356668 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-09 00:56:41.356679 | orchestrator | Thursday 09 April 2026 00:56:07 +0000 (0:00:02.682) 0:02:17.516 ******** 2026-04-09 00:56:41.356685 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356691 | orchestrator | 2026-04-09 00:56:41.356702 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-09 00:56:41.356708 | orchestrator | Thursday 09 April 2026 00:56:19 +0000 (0:00:11.751) 0:02:29.267 ******** 2026-04-09 00:56:41.356714 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356720 | orchestrator | 2026-04-09 00:56:41.356726 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-09 00:56:41.356733 | orchestrator | Thursday 09 April 2026 00:56:23 +0000 (0:00:04.547) 0:02:33.814 ******** 2026-04-09 00:56:41.356740 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356746 | orchestrator | 2026-04-09 00:56:41.356753 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-09 00:56:41.356759 | orchestrator | 2026-04-09 00:56:41.356765 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-09 00:56:41.356771 | orchestrator | 
Thursday 09 April 2026 00:56:26 +0000 (0:00:02.354) 0:02:36.169 ******** 2026-04-09 00:56:41.356787 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:56:41.356815 | orchestrator | 2026-04-09 00:56:41.356822 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-09 00:56:41.356826 | orchestrator | Thursday 09 April 2026 00:56:26 +0000 (0:00:00.583) 0:02:36.753 ******** 2026-04-09 00:56:41.356830 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356834 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356838 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356842 | orchestrator | 2026-04-09 00:56:41.356846 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-09 00:56:41.356850 | orchestrator | Thursday 09 April 2026 00:56:29 +0000 (0:00:02.597) 0:02:39.350 ******** 2026-04-09 00:56:41.356854 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356858 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356862 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356866 | orchestrator | 2026-04-09 00:56:41.356870 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-09 00:56:41.356874 | orchestrator | Thursday 09 April 2026 00:56:31 +0000 (0:00:02.492) 0:02:41.842 ******** 2026-04-09 00:56:41.356878 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356882 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356886 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356890 | orchestrator | 2026-04-09 00:56:41.356894 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-09 00:56:41.356898 | orchestrator | Thursday 09 April 2026 00:56:33 +0000 (0:00:02.004) 0:02:43.847 ******** 2026-04-09 00:56:41.356902 | 
orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356905 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356909 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:56:41.356913 | orchestrator | 2026-04-09 00:56:41.356917 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-09 00:56:41.356921 | orchestrator | Thursday 09 April 2026 00:56:36 +0000 (0:00:02.627) 0:02:46.474 ******** 2026-04-09 00:56:41.356925 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:56:41.356929 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:56:41.356933 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:56:41.356937 | orchestrator | 2026-04-09 00:56:41.356941 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-09 00:56:41.356945 | orchestrator | Thursday 09 April 2026 00:56:39 +0000 (0:00:03.132) 0:02:49.607 ******** 2026-04-09 00:56:41.356949 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:56:41.356952 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:56:41.356956 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:56:41.356960 | orchestrator | 2026-04-09 00:56:41.356964 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:56:41.356972 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-09 00:56:41.356977 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-09 00:56:41.356983 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-09 00:56:41.356987 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-09 00:56:41.356991 | orchestrator | 2026-04-09 00:56:41.356995 | orchestrator | 2026-04-09 00:56:41.356999 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-09 00:56:41.357002 | orchestrator | Thursday 09 April 2026 00:56:39 +0000 (0:00:00.206) 0:02:49.814 ******** 2026-04-09 00:56:41.357006 | orchestrator | =============================================================================== 2026-04-09 00:56:41.357010 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 30.44s 2026-04-09 00:56:41.357014 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 30.05s 2026-04-09 00:56:41.357018 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.75s 2026-04-09 00:56:41.357022 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.04s 2026-04-09 00:56:41.357026 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.09s 2026-04-09 00:56:41.357034 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.06s 2026-04-09 00:56:41.357038 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.21s 2026-04-09 00:56:41.357042 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.56s 2026-04-09 00:56:41.357046 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2026-04-09 00:56:41.357052 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.93s 2026-04-09 00:56:41.357056 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.49s 2026-04-09 00:56:41.357060 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.23s 2026-04-09 00:56:41.357064 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.13s 2026-04-09 00:56:41.357068 | orchestrator | service-cert-copy : 
mariadb | Copying over extra CA certificates -------- 3.04s 2026-04-09 00:56:41.357072 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2026-04-09 00:56:41.357076 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.63s 2026-04-09 00:56:41.357080 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.62s 2026-04-09 00:56:41.357083 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.60s 2026-04-09 00:56:41.357087 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s 2026-04-09 00:56:41.357091 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.49s 2026-04-09 00:56:41.357095 | orchestrator | 2026-04-09 00:56:41 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:56:41.357099 | orchestrator | 2026-04-09 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:44.388902 | orchestrator | 2026-04-09 00:56:44 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED 2026-04-09 00:56:44.388974 | orchestrator | 2026-04-09 00:56:44 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:44.389874 | orchestrator | 2026-04-09 00:56:44 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:56:44.389916 | orchestrator | 2026-04-09 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:47.428693 | orchestrator | 2026-04-09 00:56:47 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED 2026-04-09 00:56:47.429192 | orchestrator | 2026-04-09 00:56:47 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state STARTED 2026-04-09 00:56:47.430216 | orchestrator | 2026-04-09 00:56:47 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 
00:56:47.430282 | orchestrator | 2026-04-09 00:56:47 | INFO  | Wait 1 second(s) until the next check [identical polling of tasks fd3013c6-5f11-4bd0-b761-3db23da989cf, c8bc31a9-f329-483e-a142-e255b81d2a01 and 79a78001-0cac-4966-a5e0-f07ca125b09b repeated every ~3 s from 00:56:50 through 00:58:09, all in state STARTED] 2026-04-09 00:58:09.672601 | orchestrator | 2026-04-09 00:58:09 | 
INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:09.673046 | orchestrator | 2026-04-09 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:12.725118 | orchestrator | 2026-04-09 00:58:12 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED 2026-04-09 00:58:12.728292 | orchestrator | 2026-04-09 00:58:12 | INFO  | Task c8bc31a9-f329-483e-a142-e255b81d2a01 is in state SUCCESS 2026-04-09 00:58:12.728359 | orchestrator | 2026-04-09 00:58:12.730421 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 00:58:12.730470 | orchestrator | 2.16.14 2026-04-09 00:58:12.730479 | orchestrator | 2026-04-09 00:58:12.730487 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-09 00:58:12.730495 | orchestrator | 2026-04-09 00:58:12.730502 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-09 00:58:12.730510 | orchestrator | Thursday 09 April 2026 00:56:15 +0000 (0:00:00.563) 0:00:00.563 ******** 2026-04-09 00:58:12.730516 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:58:12.730524 | orchestrator | 2026-04-09 00:58:12.730531 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-09 00:58:12.730538 | orchestrator | Thursday 09 April 2026 00:56:16 +0000 (0:00:00.674) 0:00:01.238 ******** 2026-04-09 00:58:12.730545 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.730552 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.730559 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.730565 | orchestrator | 2026-04-09 00:58:12.730572 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-09 00:58:12.730579 | orchestrator | Thursday 09 April 2026 00:56:17 
+0000 (0:00:01.006) 0:00:02.244 ******** 2026-04-09 00:58:12.730585 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.730592 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.730599 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.730606 | orchestrator | 2026-04-09 00:58:12.730612 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 00:58:12.730619 | orchestrator | Thursday 09 April 2026 00:56:17 +0000 (0:00:00.288) 0:00:02.533 ******** 2026-04-09 00:58:12.730625 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.730801 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.730811 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.730818 | orchestrator | 2026-04-09 00:58:12.730824 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 00:58:12.730831 | orchestrator | Thursday 09 April 2026 00:56:18 +0000 (0:00:00.750) 0:00:03.283 ******** 2026-04-09 00:58:12.730838 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.730844 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.730851 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.730857 | orchestrator | 2026-04-09 00:58:12.731058 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 00:58:12.731065 | orchestrator | Thursday 09 April 2026 00:56:18 +0000 (0:00:00.290) 0:00:03.574 ******** 2026-04-09 00:58:12.731072 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.731078 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.731085 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.731092 | orchestrator | 2026-04-09 00:58:12.731098 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 00:58:12.731118 | orchestrator | Thursday 09 April 2026 00:56:18 +0000 (0:00:00.278) 0:00:03.852 ******** 2026-04-09 00:58:12.731125 | 
orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.731132 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.731138 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.731145 | orchestrator | 2026-04-09 00:58:12.731151 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 00:58:12.731158 | orchestrator | Thursday 09 April 2026 00:56:19 +0000 (0:00:00.283) 0:00:04.135 ******** 2026-04-09 00:58:12.731165 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.731173 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.731180 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.731186 | orchestrator | 2026-04-09 00:58:12.731193 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 00:58:12.731199 | orchestrator | Thursday 09 April 2026 00:56:19 +0000 (0:00:00.450) 0:00:04.586 ******** 2026-04-09 00:58:12.731218 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.731225 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.731232 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.731238 | orchestrator | 2026-04-09 00:58:12.731245 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 00:58:12.731252 | orchestrator | Thursday 09 April 2026 00:56:19 +0000 (0:00:00.268) 0:00:04.855 ******** 2026-04-09 00:58:12.731258 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:58:12.731265 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:58:12.731272 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:58:12.731279 | orchestrator | 2026-04-09 00:58:12.731315 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-09 
00:58:12.731361 | orchestrator | Thursday 09 April 2026 00:56:20 +0000 (0:00:00.575) 0:00:05.431 ******** 2026-04-09 00:58:12.731369 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.731375 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.731382 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.731389 | orchestrator | 2026-04-09 00:58:12.731395 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 00:58:12.731402 | orchestrator | Thursday 09 April 2026 00:56:20 +0000 (0:00:00.398) 0:00:05.829 ******** 2026-04-09 00:58:12.731409 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:58:12.731416 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:58:12.731423 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:58:12.731430 | orchestrator | 2026-04-09 00:58:12.731436 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 00:58:12.731442 | orchestrator | Thursday 09 April 2026 00:56:23 +0000 (0:00:02.997) 0:00:08.827 ******** 2026-04-09 00:58:12.731449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 00:58:12.731456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 00:58:12.731463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 00:58:12.731469 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.731476 | orchestrator | 2026-04-09 00:58:12.731493 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 00:58:12.731500 | orchestrator | Thursday 09 April 2026 00:56:24 +0000 (0:00:00.383) 0:00:09.211 ******** 2026-04-09 00:58:12.731509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.731518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.731525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.731532 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.731538 | orchestrator | 2026-04-09 00:58:12.731545 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 00:58:12.731552 | orchestrator | Thursday 09 April 2026 00:56:24 +0000 (0:00:00.680) 0:00:09.892 ******** 2026-04-09 00:58:12.731561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.731579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 
00:58:12.731587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.731593 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.731600 | orchestrator | 2026-04-09 00:58:12.731607 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 00:58:12.731613 | orchestrator | Thursday 09 April 2026 00:56:24 +0000 (0:00:00.146) 0:00:10.038 ******** 2026-04-09 00:58:12.731621 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd7072f158443', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 00:56:21.601843', 'end': '2026-04-09 00:56:21.630811', 'delta': '0:00:00.028968', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d7072f158443'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 00:58:12.731631 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8268d142fa68', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 00:56:22.654375', 'end': '2026-04-09 00:56:22.692330', 'delta': '0:00:00.037955', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8268d142fa68'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 00:58:12.731646 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '36d7517b2784', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 00:56:23.525852', 'end': '2026-04-09 00:56:23.568834', 'delta': '0:00:00.042982', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['36d7517b2784'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 00:58:12.731653 | orchestrator | 2026-04-09 00:58:12.731660 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 00:58:12.731667 | orchestrator | Thursday 09 April 2026 00:56:25 +0000 (0:00:00.344) 0:00:10.382 ******** 2026-04-09 00:58:12.731674 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.731681 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.731897 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.731905 | orchestrator | 2026-04-09 00:58:12.731912 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 00:58:12.731919 | orchestrator | Thursday 09 April 2026 00:56:25 +0000 (0:00:00.428) 0:00:10.811 ******** 2026-04-09 00:58:12.731925 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-09 00:58:12.731933 | orchestrator | 2026-04-09 00:58:12.731940 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 00:58:12.731946 | orchestrator | Thursday 09 April 2026 00:56:27 +0000 (0:00:01.319) 0:00:12.131 ******** 2026-04-09 00:58:12.731953 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.731979 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.731985 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.731992 | orchestrator | 2026-04-09 00:58:12.731999 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 00:58:12.732005 | orchestrator | Thursday 09 April 2026 00:56:27 +0000 (0:00:00.273) 0:00:12.404 ******** 2026-04-09 00:58:12.732012 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732019 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732025 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732032 | orchestrator | 2026-04-09 00:58:12.732039 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:58:12.732049 | orchestrator | Thursday 09 April 2026 00:56:27 +0000 (0:00:00.381) 0:00:12.786 ******** 2026-04-09 00:58:12.732056 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732062 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732069 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732075 | orchestrator | 2026-04-09 00:58:12.732082 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 00:58:12.732088 | orchestrator | Thursday 09 April 2026 00:56:28 +0000 (0:00:00.444) 0:00:13.231 ******** 2026-04-09 00:58:12.732095 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.732102 | orchestrator | 2026-04-09 00:58:12.732108 | orchestrator | TASK 
[ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 00:58:12.732114 | orchestrator | Thursday 09 April 2026 00:56:28 +0000 (0:00:00.135) 0:00:13.366 ******** 2026-04-09 00:58:12.732120 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732127 | orchestrator | 2026-04-09 00:58:12.732134 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:58:12.732141 | orchestrator | Thursday 09 April 2026 00:56:28 +0000 (0:00:00.213) 0:00:13.579 ******** 2026-04-09 00:58:12.732147 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732154 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732160 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732166 | orchestrator | 2026-04-09 00:58:12.732173 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 00:58:12.732179 | orchestrator | Thursday 09 April 2026 00:56:28 +0000 (0:00:00.263) 0:00:13.843 ******** 2026-04-09 00:58:12.732186 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732192 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732199 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732206 | orchestrator | 2026-04-09 00:58:12.732213 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 00:58:12.732220 | orchestrator | Thursday 09 April 2026 00:56:29 +0000 (0:00:00.293) 0:00:14.136 ******** 2026-04-09 00:58:12.732226 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732233 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732240 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732247 | orchestrator | 2026-04-09 00:58:12.732254 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 00:58:12.732260 | orchestrator | Thursday 09 April 2026 
00:56:29 +0000 (0:00:00.473) 0:00:14.610 ******** 2026-04-09 00:58:12.732267 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732278 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732285 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732291 | orchestrator | 2026-04-09 00:58:12.732298 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 00:58:12.732304 | orchestrator | Thursday 09 April 2026 00:56:29 +0000 (0:00:00.321) 0:00:14.932 ******** 2026-04-09 00:58:12.732309 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732315 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732321 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732327 | orchestrator | 2026-04-09 00:58:12.732334 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 00:58:12.732339 | orchestrator | Thursday 09 April 2026 00:56:30 +0000 (0:00:00.308) 0:00:15.240 ******** 2026-04-09 00:58:12.732345 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732351 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732358 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732364 | orchestrator | 2026-04-09 00:58:12.732395 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 00:58:12.732402 | orchestrator | Thursday 09 April 2026 00:56:30 +0000 (0:00:00.330) 0:00:15.571 ******** 2026-04-09 00:58:12.732409 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732416 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732422 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.732429 | orchestrator | 2026-04-09 00:58:12.732435 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-09 00:58:12.732442 | orchestrator | Thursday 09 April 2026 
00:56:30 +0000 (0:00:00.445) 0:00:16.016 ******** 2026-04-09 00:58:12.732450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f', 'dm-uuid-LVM-MwHp97WqxiAKjrPzM1rqjGGR9t0YLZOSpWqFrFJnKVC7KVZHoIWS487LC2ojJlF4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b', 'dm-uuid-LVM-oidJBbkg2nUZzbFblhIzA8HRXCMWuowncfNejT9B0KxURmhyfyY4upG4oDblHBtU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732611 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9', 'dm-uuid-LVM-XysAVcqS16jjDfkbWOU4ZClUSjuwTp81wvaiLa0cF3uDbvQpuSYWqCzba7pjNHyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uiJAMo-y50f-8GAZ-AMdd-NNz0-bt1F-FslSBh', 'scsi-0QEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7', 'scsi-SQEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-99esnd-k3Yc-WLEz-KCyI-RcuL-Idv2-dz5HD0', 'scsi-0QEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1', 'scsi-SQEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd', 'dm-uuid-LVM-soYDyklFCUZiaWxHAKv86XAnIxxqPtyqUp0438blgymvNmn5pe4IrJcTciV01Wa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a', 'scsi-SQEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-09 00:58:12.732784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732836 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.732848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Fu4yKb-0Kk3-b0rT-0P6A-kNfI-wm1i-82Giss', 'scsi-0QEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b', 'scsi-SQEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZTDjeK-2HwQ-AeGM-7YlK-G32T-0cCX-cAtmDf', 'scsi-0QEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48', 'scsi-SQEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec', 'scsi-SQEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.732892 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.732899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77990f9--27fa--58e8--a0b8--915245e923bd-osd--block--e77990f9--27fa--58e8--a0b8--915245e923bd', 'dm-uuid-LVM-uDhV6caHL211nYXtqSdoo3op85zXuT4LC4DceUxfDV1jL83Kf3awHkqz08dj0ZJi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6c03351d--b2bb--55a5--9b19--7d0118202256-osd--block--6c03351d--b2bb--55a5--9b19--7d0118202256', 'dm-uuid-LVM-4S4msUcaLagRA6mssTeNi6WZstmM0v6Px8dOP78xYQMY6K7swxucxsNeXVpx2NZm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:58:12.732987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.733002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e77990f9--27fa--58e8--a0b8--915245e923bd-osd--block--e77990f9--27fa--58e8--a0b8--915245e923bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3EmKSY-AkaE-j80P-5gp5-pF4R-nSaf-PH5I5E', 'scsi-0QEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3', 'scsi-SQEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.733009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6c03351d--b2bb--55a5--9b19--7d0118202256-osd--block--6c03351d--b2bb--55a5--9b19--7d0118202256'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HedMRi-I0PG-09i6-bTXb-lsmq-3ePu-22hUR5', 'scsi-0QEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645', 'scsi-SQEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.733016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47', 'scsi-SQEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.733028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:58:12.733035 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.733042 | orchestrator | 2026-04-09 00:58:12.733049 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-09 00:58:12.733056 | orchestrator | Thursday 09 April 2026 00:56:31 +0000 (0:00:00.570) 0:00:16.587 ******** 2026-04-09 00:58:12.733063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f', 'dm-uuid-LVM-MwHp97WqxiAKjrPzM1rqjGGR9t0YLZOSpWqFrFJnKVC7KVZHoIWS487LC2ojJlF4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733074 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b', 'dm-uuid-LVM-oidJBbkg2nUZzbFblhIzA8HRXCMWuowncfNejT9B0KxURmhyfyY4upG4oDblHBtU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733086 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2746891-05dc-4fd7-9896-5ab09f2729dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 00:58:12.733173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9', 'dm-uuid-LVM-XysAVcqS16jjDfkbWOU4ZClUSjuwTp81wvaiLa0cF3uDbvQpuSYWqCzba7pjNHyh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0ecce907--b02d--5708--a2ce--6926a186870f-osd--block--0ecce907--b02d--5708--a2ce--6926a186870f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uiJAMo-y50f-8GAZ-AMdd-NNz0-bt1F-FslSBh', 'scsi-0QEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7', 'scsi-SQEMU_QEMU_HARDDISK_6ddb4c8f-ad36-4043-a4c5-c841e18226a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd', 'dm-uuid-LVM-soYDyklFCUZiaWxHAKv86XAnIxxqPtyqUp0438blgymvNmn5pe4IrJcTciV01Wa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b063fe53--4e4e--551f--8a45--331436b07c8b-osd--block--b063fe53--4e4e--551f--8a45--331436b07c8b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-99esnd-k3Yc-WLEz-KCyI-RcuL-Idv2-dz5HD0', 'scsi-0QEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1', 'scsi-SQEMU_QEMU_HARDDISK_699b0239-fef5-4b39-83a4-6673e212f6a1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a', 'scsi-SQEMU_QEMU_HARDDISK_d010236c-a8cf-44aa-aea8-1599ad338c7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733249 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:58:12.733256 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733287 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733315 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_418d3dac-9e56-4e13-a3ce-7a4dc3cf5cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733329 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e77990f9--27fa--58e8--a0b8--915245e923bd-osd--block--e77990f9--27fa--58e8--a0b8--915245e923bd', 'dm-uuid-LVM-uDhV6caHL211nYXtqSdoo3op85zXuT4LC4DceUxfDV1jL83Kf3awHkqz08dj0ZJi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fa87c95d--d840--5309--8296--5c77234dd7e9-osd--block--fa87c95d--d840--5309--8296--5c77234dd7e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Fu4yKb-0Kk3-b0rT-0P6A-kNfI-wm1i-82Giss', 'scsi-0QEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b', 'scsi-SQEMU_QEMU_HARDDISK_646eefac-58ec-4f92-9595-08f65c34439b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733346 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6c03351d--b2bb--55a5--9b19--7d0118202256-osd--block--6c03351d--b2bb--55a5--9b19--7d0118202256', 'dm-uuid-LVM-4S4msUcaLagRA6mssTeNi6WZstmM0v6Px8dOP78xYQMY6K7swxucxsNeXVpx2NZm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733356 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e4752f0c--8dc2--56ff--98d4--03c08b41fecd-osd--block--e4752f0c--8dc2--56ff--98d4--03c08b41fecd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZTDjeK-2HwQ-AeGM-7YlK-G32T-0cCX-cAtmDf', 'scsi-0QEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48', 'scsi-SQEMU_QEMU_HARDDISK_bf0ee1e9-2919-47e1-8e63-acece2856b48'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733369 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733376 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec', 'scsi-SQEMU_QEMU_HARDDISK_c2c77d89-653e-4715-a798-7d926e5d00ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733387 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733402 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:58:12.733410 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733440 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733451 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1fbf9dd-48b0-4566-a31a-874418385eae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e77990f9--27fa--58e8--a0b8--915245e923bd-osd--block--e77990f9--27fa--58e8--a0b8--915245e923bd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3EmKSY-AkaE-j80P-5gp5-pF4R-nSaf-PH5I5E', 'scsi-0QEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3', 'scsi-SQEMU_QEMU_HARDDISK_5ad63fe0-a9d3-4f97-9b8b-66022c6f76e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6c03351d--b2bb--55a5--9b19--7d0118202256-osd--block--6c03351d--b2bb--55a5--9b19--7d0118202256'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HedMRi-I0PG-09i6-bTXb-lsmq-3ePu-22hUR5', 'scsi-0QEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645', 'scsi-SQEMU_QEMU_HARDDISK_cb2ae45d-eb27-4723-ba8e-6f14f0885645'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47', 'scsi-SQEMU_QEMU_HARDDISK_d0bd2e62-a20a-4f0c-a0ab-e9709b4d6b47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:58:12.733524 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.733531 | orchestrator | 2026-04-09 00:58:12.733538 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 00:58:12.733545 | orchestrator | Thursday 09 April 2026 00:56:32 +0000 (0:00:00.581) 0:00:17.168 ******** 2026-04-09 00:58:12.733552 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.733558 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.733565 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.733571 | orchestrator | 2026-04-09 00:58:12.733578 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-09 00:58:12.733585 | orchestrator | Thursday 09 April 2026 00:56:32 +0000 (0:00:00.633) 0:00:17.802 ******** 2026-04-09 00:58:12.733591 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.733598 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.733605 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.733612 | orchestrator | 2026-04-09 00:58:12.733619 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 00:58:12.733626 | orchestrator | Thursday 09 April 2026 00:56:33 +0000 (0:00:00.435) 0:00:18.237 ******** 2026-04-09 00:58:12.733633 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.733640 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.733647 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.733654 | orchestrator | 2026-04-09 00:58:12.733661 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 00:58:12.733668 | orchestrator | Thursday 09 April 2026 00:56:33 +0000 (0:00:00.605) 0:00:18.843 
******** 2026-04-09 00:58:12.733675 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.733682 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.733732 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.733739 | orchestrator | 2026-04-09 00:58:12.733746 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 00:58:12.733753 | orchestrator | Thursday 09 April 2026 00:56:34 +0000 (0:00:00.305) 0:00:19.148 ******** 2026-04-09 00:58:12.733759 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.733766 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.733772 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.733779 | orchestrator | 2026-04-09 00:58:12.733785 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 00:58:12.733796 | orchestrator | Thursday 09 April 2026 00:56:34 +0000 (0:00:00.408) 0:00:19.557 ******** 2026-04-09 00:58:12.733803 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.733809 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.733816 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.733823 | orchestrator | 2026-04-09 00:58:12.733829 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 00:58:12.733836 | orchestrator | Thursday 09 April 2026 00:56:34 +0000 (0:00:00.492) 0:00:20.050 ******** 2026-04-09 00:58:12.733842 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-09 00:58:12.733849 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-09 00:58:12.733856 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-09 00:58:12.733862 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-09 00:58:12.733869 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-09 00:58:12.733875 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-09 00:58:12.733882 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-09 00:58:12.733888 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-09 00:58:12.733901 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-09 00:58:12.733907 | orchestrator | 2026-04-09 00:58:12.733914 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 00:58:12.733921 | orchestrator | Thursday 09 April 2026 00:56:35 +0000 (0:00:00.861) 0:00:20.912 ******** 2026-04-09 00:58:12.733928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 00:58:12.733935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 00:58:12.733941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 00:58:12.733948 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.733954 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 00:58:12.733961 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 00:58:12.733967 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 00:58:12.733974 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.733981 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 00:58:12.733987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 00:58:12.733994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 00:58:12.734000 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.734006 | orchestrator | 2026-04-09 00:58:12.734056 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 00:58:12.734065 | orchestrator | Thursday 09 April 2026 00:56:36 +0000 (0:00:00.396) 0:00:21.308 ******** 2026-04-09 
00:58:12.734073 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:58:12.734081 | orchestrator | 2026-04-09 00:58:12.734088 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 00:58:12.734096 | orchestrator | Thursday 09 April 2026 00:56:36 +0000 (0:00:00.750) 0:00:22.059 ******** 2026-04-09 00:58:12.734109 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.734116 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734124 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.734130 | orchestrator | 2026-04-09 00:58:12.734137 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 00:58:12.734145 | orchestrator | Thursday 09 April 2026 00:56:37 +0000 (0:00:00.401) 0:00:22.461 ******** 2026-04-09 00:58:12.734152 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734159 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.734166 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.734173 | orchestrator | 2026-04-09 00:58:12.734180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 00:58:12.734187 | orchestrator | Thursday 09 April 2026 00:56:37 +0000 (0:00:00.291) 0:00:22.752 ******** 2026-04-09 00:58:12.734194 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734201 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.734208 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:58:12.734215 | orchestrator | 2026-04-09 00:58:12.734222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 00:58:12.734229 | orchestrator | Thursday 09 April 2026 00:56:37 +0000 (0:00:00.310) 0:00:23.063 ******** 2026-04-09 
00:58:12.734236 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.734243 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.734250 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.734257 | orchestrator | 2026-04-09 00:58:12.734264 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 00:58:12.734271 | orchestrator | Thursday 09 April 2026 00:56:38 +0000 (0:00:00.713) 0:00:23.776 ******** 2026-04-09 00:58:12.734278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:58:12.734285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:58:12.734292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:58:12.734306 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734313 | orchestrator | 2026-04-09 00:58:12.734320 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 00:58:12.734327 | orchestrator | Thursday 09 April 2026 00:56:39 +0000 (0:00:00.348) 0:00:24.124 ******** 2026-04-09 00:58:12.734333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:58:12.734339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:58:12.734345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:58:12.734351 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734357 | orchestrator | 2026-04-09 00:58:12.734364 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 00:58:12.734371 | orchestrator | Thursday 09 April 2026 00:56:39 +0000 (0:00:00.380) 0:00:24.504 ******** 2026-04-09 00:58:12.734382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:58:12.734389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:58:12.734396 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:58:12.734403 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734410 | orchestrator | 2026-04-09 00:58:12.734417 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 00:58:12.734424 | orchestrator | Thursday 09 April 2026 00:56:39 +0000 (0:00:00.352) 0:00:24.857 ******** 2026-04-09 00:58:12.734431 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:58:12.734438 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:58:12.734444 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:58:12.734451 | orchestrator | 2026-04-09 00:58:12.734459 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 00:58:12.734466 | orchestrator | Thursday 09 April 2026 00:56:40 +0000 (0:00:00.334) 0:00:25.192 ******** 2026-04-09 00:58:12.734473 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 00:58:12.734480 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 00:58:12.734487 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 00:58:12.734494 | orchestrator | 2026-04-09 00:58:12.734501 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 00:58:12.734508 | orchestrator | Thursday 09 April 2026 00:56:40 +0000 (0:00:00.533) 0:00:25.725 ******** 2026-04-09 00:58:12.734515 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:58:12.734521 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:58:12.734528 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:58:12.734534 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 00:58:12.734540 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-09 00:58:12.734547 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:58:12.734553 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:58:12.734560 | orchestrator | 2026-04-09 00:58:12.734567 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 00:58:12.734574 | orchestrator | Thursday 09 April 2026 00:56:41 +0000 (0:00:00.822) 0:00:26.548 ******** 2026-04-09 00:58:12.734580 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:58:12.734587 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:58:12.734594 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:58:12.734601 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 00:58:12.734608 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 00:58:12.734616 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:58:12.734632 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:58:12.734639 | orchestrator | 2026-04-09 00:58:12.734646 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-09 00:58:12.734652 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:01.647) 0:00:28.196 ******** 2026-04-09 00:58:12.734659 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:58:12.734666 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:58:12.734673 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-09 00:58:12.734679 | orchestrator | 2026-04-09 00:58:12.734714 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-09 00:58:12.734722 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.347) 0:00:28.544 ******** 2026-04-09 00:58:12.734729 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:58:12.734737 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:58:12.734744 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:58:12.734751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:58:12.734762 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:58:12.734769 | orchestrator | 2026-04-09 00:58:12.734776 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-09 00:58:12.734782 | orchestrator | Thursday 09 April 2026 00:57:23 +0000 (0:00:39.594) 0:01:08.138 ******** 2026-04-09 00:58:12.734789 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734802 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734809 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734822 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734828 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-09 00:58:12.734835 | orchestrator | 2026-04-09 00:58:12.734842 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-09 00:58:12.734848 | orchestrator | Thursday 09 April 2026 00:57:42 +0000 (0:00:19.464) 0:01:27.602 ******** 2026-04-09 00:58:12.734856 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734863 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734876 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734883 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734889 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734895 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734901 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:58:12.734909 | orchestrator | 2026-04-09 00:58:12.734915 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-09 00:58:12.734922 | orchestrator | Thursday 09 April 2026 00:57:52 +0000 (0:00:09.786) 0:01:37.389 ******** 2026-04-09 00:58:12.734929 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734936 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:58:12.734943 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:58:12.734950 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734956 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:58:12.734968 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:58:12.734975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.734981 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:58:12.734987 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:58:12.734993 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.735000 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:58:12.735006 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:58:12.735013 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.735020 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-09 00:58:12.735026 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:58:12.735033 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:58:12.735040 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:58:12.735047 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:58:12.735054 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-09 00:58:12.735060 | orchestrator | 2026-04-09 00:58:12.735067 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:58:12.735074 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-09 00:58:12.735082 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 00:58:12.735089 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-09 00:58:12.735096 | orchestrator | 2026-04-09 00:58:12.735103 | orchestrator | 2026-04-09 00:58:12.735110 | orchestrator | 2026-04-09 00:58:12.735117 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:58:12.735124 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:17.975) 0:01:55.364 ******** 2026-04-09 00:58:12.735134 | orchestrator | =============================================================================== 2026-04-09 00:58:12.735141 | orchestrator | create openstack pool(s) ----------------------------------------------- 39.59s 2026-04-09 00:58:12.735152 | orchestrator | generate keys ---------------------------------------------------------- 19.46s 2026-04-09 00:58:12.735159 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.98s 
2026-04-09 00:58:12.735166 | orchestrator | get keys from monitors -------------------------------------------------- 9.79s 2026-04-09 00:58:12.735173 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.00s 2026-04-09 00:58:12.735180 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.65s 2026-04-09 00:58:12.735187 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.32s 2026-04-09 00:58:12.735194 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.01s 2026-04-09 00:58:12.735200 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2026-04-09 00:58:12.735207 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.82s 2026-04-09 00:58:12.735214 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s 2026-04-09 00:58:12.735221 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.75s 2026-04-09 00:58:12.735228 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.71s 2026-04-09 00:58:12.735234 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.68s 2026-04-09 00:58:12.735242 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.67s 2026-04-09 00:58:12.735249 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.63s 2026-04-09 00:58:12.735256 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.61s 2026-04-09 00:58:12.735263 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2026-04-09 00:58:12.735270 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.58s 2026-04-09 
00:58:12.735277 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s
2026-04-09 00:58:12.735284 | orchestrator | 2026-04-09 00:58:12 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED
2026-04-09 00:58:12.735290 | orchestrator | 2026-04-09 00:58:12 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED
2026-04-09 00:58:12.735297 | orchestrator | 2026-04-09 00:58:12 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:15.786250 | orchestrator | 2026-04-09 00:58:15 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED
2026-04-09 00:58:15.789044 | orchestrator | 2026-04-09 00:58:15 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED
2026-04-09 00:58:15.791874 | orchestrator | 2026-04-09 00:58:15 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED
2026-04-09 00:58:15.791935 | orchestrator | 2026-04-09 00:58:15 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:18.831147 | orchestrator | 2026-04-09 00:58:18 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED
2026-04-09 00:58:18.833142 | orchestrator | 2026-04-09 00:58:18 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED
2026-04-09 00:58:18.835050 | orchestrator | 2026-04-09 00:58:18 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED
2026-04-09 00:58:18.835416 | orchestrator | 2026-04-09 00:58:18 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:21.876898 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state STARTED
2026-04-09 00:58:21.879408 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED
2026-04-09 00:58:21.881885 | orchestrator | 2026-04-09 00:58:21 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED
2026-04-09 00:58:21.881991 | orchestrator | 2026-04-09 00:58:21 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:24.925169 | orchestrator | 2026-04-09 00:58:24 | INFO  | Task fd3013c6-5f11-4bd0-b761-3db23da989cf is in state SUCCESS
2026-04-09 00:58:24.926708 | orchestrator |
2026-04-09 00:58:24.926763 | orchestrator |
2026-04-09 00:58:24.926772 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:58:24.926780 | orchestrator |
2026-04-09 00:58:24.926788 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:58:24.926795 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.274) 0:00:00.274 ********
2026-04-09 00:58:24.926802 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.926811 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.926818 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.926825 | orchestrator |
2026-04-09 00:58:24.926832 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:58:24.926838 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.282) 0:00:00.557 ********
2026-04-09 00:58:24.926845 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-09 00:58:24.926867 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-09 00:58:24.926874 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-09 00:58:24.926881 | orchestrator |
2026-04-09 00:58:24.926888 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-09 00:58:24.926894 | orchestrator |
2026-04-09 00:58:24.926901 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-09 00:58:24.926908 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.265) 0:00:00.822 ********
2026-04-09 00:58:24.926916 | orchestrator | included:
/ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:58:24.926923 | orchestrator | 2026-04-09 00:58:24.926930 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-09 00:58:24.926937 | orchestrator | Thursday 09 April 2026 00:56:44 +0000 (0:00:00.523) 0:00:01.346 ******** 2026-04-09 00:58:24.926949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.926998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.927007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.927019 | orchestrator | 2026-04-09 00:58:24.927026 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-09 00:58:24.927034 | orchestrator | Thursday 09 April 2026 00:56:45 +0000 (0:00:01.472) 0:00:02.819 ******** 2026-04-09 00:58:24.927040 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:58:24.927047 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:58:24.927054 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:58:24.927062 | orchestrator | 2026-04-09 00:58:24.927166 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:58:24.927175 | orchestrator | Thursday 09 April 2026 00:56:45 +0000 (0:00:00.267) 0:00:03.086 ******** 2026-04-09 00:58:24.927181 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:58:24.927348 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 
00:58:24.927360 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:58:24.927368 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:58:24.927376 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:58:24.927384 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:58:24.927392 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:58:24.927399 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:58:24.927413 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:58:24.927422 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:58:24.927430 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:58:24.927438 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:58:24.927446 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:58:24.927453 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:58:24.927459 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:58:24.927465 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:58:24.927471 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:58:24.927478 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:58:24.927484 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 
'enabled': False})  2026-04-09 00:58:24.927490 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:58:24.927497 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:58:24.927503 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:58:24.927510 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:58:24.927516 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:58:24.927524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-09 00:58:24.927541 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-09 00:58:24.927548 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-09 00:58:24.927555 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-09 00:58:24.927562 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-09 00:58:24.927569 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-09 00:58:24.927575 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': 
True})
2026-04-09 00:58:24.927582 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-09 00:58:24.927589 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-09 00:58:24.927597 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-09 00:58:24.927604 | orchestrator |
2026-04-09 00:58:24.927611 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.927618 | orchestrator | Thursday 09 April 2026 00:56:46 +0000 (0:00:00.687) 0:00:03.774 ********
2026-04-09 00:58:24.927625 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.927632 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.927639 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.927646 | orchestrator |
2026-04-09 00:58:24.927653 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.927659 | orchestrator | Thursday 09 April 2026 00:56:47 +0000 (0:00:00.453) 0:00:04.228 ********
2026-04-09 00:58:24.927666 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.927701 | orchestrator |
2026-04-09 00:58:24.927715 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.927721 | orchestrator | Thursday 09 April 2026 00:56:47 +0000 (0:00:00.117) 0:00:04.345 ********
2026-04-09 00:58:24.927727 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.927732 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.927738 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.927745 | orchestrator |
2026-04-09 00:58:24.927751 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.927757 | orchestrator | Thursday 09 April 2026 00:56:47 +0000 (0:00:00.297) 0:00:04.642 ********
2026-04-09 00:58:24.927763 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.927769 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.927776 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.927782 | orchestrator |
2026-04-09 00:58:24.927788 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.927800 | orchestrator | Thursday 09 April 2026 00:56:47 +0000 (0:00:00.291) 0:00:04.934 ********
2026-04-09 00:58:24.927806 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.927813 | orchestrator |
2026-04-09 00:58:24.927818 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.927824 | orchestrator | Thursday 09 April 2026 00:56:47 +0000 (0:00:00.110) 0:00:05.044 ********
2026-04-09 00:58:24.927831 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.927844 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.927850 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.927856 | orchestrator |
2026-04-09 00:58:24.927863 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.927869 | orchestrator | Thursday 09 April 2026 00:56:48 +0000 (0:00:00.450) 0:00:05.495 ********
2026-04-09 00:58:24.927876 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.927882 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.927889 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.927896 | orchestrator |
2026-04-09 00:58:24.927902 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.927909 | orchestrator | Thursday 09 April 2026 00:56:48 +0000 (0:00:00.299) 0:00:05.795 ********
2026-04-09 00:58:24.927916 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.927923 | orchestrator |
2026-04-09 00:58:24.927929 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.927936 | orchestrator | Thursday 09 April 2026 00:56:48 +0000 (0:00:00.118) 0:00:05.913 ********
2026-04-09 00:58:24.927942 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.927948 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.927955 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.927962 | orchestrator |
2026-04-09 00:58:24.927970 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.927978 | orchestrator | Thursday 09 April 2026 00:56:49 +0000 (0:00:00.257) 0:00:06.171 ********
2026-04-09 00:58:24.927985 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.927993 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928000 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928006 | orchestrator |
2026-04-09 00:58:24.928014 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928022 | orchestrator | Thursday 09 April 2026 00:56:49 +0000 (0:00:00.262) 0:00:06.434 ********
2026-04-09 00:58:24.928029 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928040 | orchestrator |
2026-04-09 00:58:24.928049 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928056 | orchestrator | Thursday 09 April 2026 00:56:49 +0000 (0:00:00.126) 0:00:06.560 ********
2026-04-09 00:58:24.928063 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928071 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928078 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928085 | orchestrator |
2026-04-09 00:58:24.928092 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.928099 | orchestrator | Thursday 09 April 2026 00:56:49 +0000 (0:00:00.454) 0:00:07.015 ********
2026-04-09 00:58:24.928106 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.928113 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928120 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928128 | orchestrator |
2026-04-09 00:58:24.928137 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928144 | orchestrator | Thursday 09 April 2026 00:56:50 +0000 (0:00:00.261) 0:00:07.276 ********
2026-04-09 00:58:24.928152 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928159 | orchestrator |
2026-04-09 00:58:24.928166 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928175 | orchestrator | Thursday 09 April 2026 00:56:50 +0000 (0:00:00.117) 0:00:07.393 ********
2026-04-09 00:58:24.928182 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928190 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928197 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928204 | orchestrator |
2026-04-09 00:58:24.928211 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.928219 | orchestrator | Thursday 09 April 2026 00:56:50 +0000 (0:00:00.253) 0:00:07.646 ********
2026-04-09 00:58:24.928227 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.928241 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928251 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928260 | orchestrator |
2026-04-09 00:58:24.928267 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928275 | orchestrator | Thursday 09 April 2026 00:56:50 +0000 (0:00:00.252) 0:00:07.899 ********
2026-04-09 00:58:24.928282 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928289 | orchestrator |
2026-04-09 00:58:24.928296 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928303 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.234) 0:00:08.134 ********
2026-04-09 00:58:24.928312 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928318 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928325 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928332 | orchestrator |
2026-04-09 00:58:24.928338 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.928353 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.239) 0:00:08.374 ********
2026-04-09 00:58:24.928359 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.928366 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928372 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928378 | orchestrator |
2026-04-09 00:58:24.928385 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928392 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.261) 0:00:08.635 ********
2026-04-09 00:58:24.928398 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928404 | orchestrator |
2026-04-09 00:58:24.928410 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928416 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.109) 0:00:08.744 ********
2026-04-09 00:58:24.928422 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928428 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928434 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928440 | orchestrator |
2026-04-09 00:58:24.928450 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.928457 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.238) 0:00:08.983 ********
2026-04-09 00:58:24.928463 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.928469 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928475 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928481 | orchestrator |
2026-04-09 00:58:24.928488 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928494 | orchestrator | Thursday 09 April 2026 00:56:52 +0000 (0:00:00.368) 0:00:09.351 ********
2026-04-09 00:58:24.928501 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928507 | orchestrator |
2026-04-09 00:58:24.928514 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928520 | orchestrator | Thursday 09 April 2026 00:56:52 +0000 (0:00:00.120) 0:00:09.472 ********
2026-04-09 00:58:24.928527 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928533 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928539 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928545 | orchestrator |
2026-04-09 00:58:24.928550 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.928556 | orchestrator | Thursday 09 April 2026 00:56:52 +0000 (0:00:00.324) 0:00:09.796 ********
2026-04-09 00:58:24.928561 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.928567 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928572 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928578 | orchestrator |
2026-04-09 00:58:24.928584 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928591 | orchestrator | Thursday 09 April 2026 00:56:52 +0000 (0:00:00.251) 0:00:10.048 ********
2026-04-09 00:58:24.928597 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928603 | orchestrator |
2026-04-09 00:58:24.928608 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928755 | orchestrator | Thursday 09 April 2026 00:56:53 +0000 (0:00:00.097) 0:00:10.145 ********
2026-04-09 00:58:24.928767 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928773 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928780 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928786 | orchestrator |
2026-04-09 00:58:24.928793 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-09 00:58:24.928799 | orchestrator | Thursday 09 April 2026 00:56:53 +0000 (0:00:00.259) 0:00:10.404 ********
2026-04-09 00:58:24.928805 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:58:24.928812 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:58:24.928818 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:58:24.928824 | orchestrator |
2026-04-09 00:58:24.928830 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-09 00:58:24.928836 | orchestrator | Thursday 09 April 2026 00:56:53 +0000 (0:00:00.378) 0:00:10.783 ********
2026-04-09 00:58:24.928842 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928847 | orchestrator |
2026-04-09 00:58:24.928852 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-09 00:58:24.928858 | orchestrator | Thursday 09 April 2026 00:56:53 +0000 (0:00:00.126) 0:00:10.909 ********
2026-04-09 00:58:24.928864 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:58:24.928870 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:58:24.928875 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:58:24.928880 | orchestrator |
2026-04-09 00:58:24.928886 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-09 00:58:24.928892 | orchestrator | Thursday 09 April 2026 00:56:54 +0000 (0:00:00.252) 0:00:11.162 ********
2026-04-09 00:58:24.928897 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:58:24.928902 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:58:24.928908 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:58:24.928913 | orchestrator |
2026-04-09 00:58:24.928919 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-09 00:58:24.928926 | orchestrator | Thursday 09 April 2026 00:56:55 +0000 (0:00:01.629) 0:00:12.791 ********
2026-04-09 00:58:24.928932 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-09 00:58:24.928939 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-09 00:58:24.928945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-09 00:58:24.928952 | orchestrator |
2026-04-09 00:58:24.928958 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-09 00:58:24.928965 | orchestrator | Thursday 09 April 2026 00:56:57 +0000 (0:00:01.998) 0:00:14.790 ********
2026-04-09 00:58:24.928971 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-09 00:58:24.928978 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-09 00:58:24.928985 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-09 00:58:24.928991 | orchestrator |
2026-04-09 00:58:24.928998 | orchestrator | TASK [horizon : Copying over
custom-settings.py] ******************************* 2026-04-09 00:58:24.929016 | orchestrator | Thursday 09 April 2026 00:56:59 +0000 (0:00:01.932) 0:00:16.722 ******** 2026-04-09 00:58:24.929023 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:58:24.929029 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:58:24.929036 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:58:24.929042 | orchestrator | 2026-04-09 00:58:24.929048 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-09 00:58:24.929071 | orchestrator | Thursday 09 April 2026 00:57:01 +0000 (0:00:01.537) 0:00:18.260 ******** 2026-04-09 00:58:24.929077 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:58:24.929084 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:58:24.929090 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:58:24.929097 | orchestrator | 2026-04-09 00:58:24.929103 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-09 00:58:24.929110 | orchestrator | Thursday 09 April 2026 00:57:01 +0000 (0:00:00.286) 0:00:18.546 ******** 2026-04-09 00:58:24.929116 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:58:24.929122 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:58:24.929129 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:58:24.929134 | orchestrator | 2026-04-09 00:58:24.929141 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:58:24.929147 | orchestrator | Thursday 09 April 2026 00:57:01 +0000 (0:00:00.268) 0:00:18.814 ******** 2026-04-09 00:58:24.929153 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-09 00:58:24.929160 | orchestrator | 2026-04-09 00:58:24.929166 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-09 00:58:24.929171 | orchestrator | Thursday 09 April 2026 00:57:02 +0000 (0:00:00.695) 0:00:19.510 ******** 2026-04-09 00:58:24.929221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.929245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.929260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.929267 | orchestrator | 2026-04-09 00:58:24.929274 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-09 00:58:24.929282 | orchestrator | Thursday 09 April 2026 00:57:04 +0000 (0:00:01.657) 0:00:21.167 ******** 2026-04-09 00:58:24.929304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:58:24.929311 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:58:24.929324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:58:24.929335 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:58:24.929347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:58:24.929354 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 00:58:24.929361 | orchestrator | 2026-04-09 00:58:24.929369 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-09 00:58:24.929378 | orchestrator | Thursday 09 April 2026 00:57:04 +0000 (0:00:00.886) 0:00:22.054 ******** 2026-04-09 00:58:24.929391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:58:24.929402 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:58:24.929413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:58:24.929420 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:58:24.929436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:58:24.929450 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:58:24.929457 | orchestrator | 2026-04-09 00:58:24.929464 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-09 00:58:24.929478 | orchestrator | Thursday 09 April 2026 00:57:06 +0000 (0:00:01.199) 0:00:23.253 ******** 2026-04-09 00:58:24.929488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 
'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.929504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.929517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:58:24.929529 | orchestrator | 
2026-04-09 00:58:24.929536 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:58:24.929543 | orchestrator | Thursday 09 April 2026 00:57:07 +0000 (0:00:01.348) 0:00:24.601 ******** 2026-04-09 00:58:24.929551 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:58:24.929558 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:58:24.929565 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:58:24.929571 | orchestrator | 2026-04-09 00:58:24.929578 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:58:24.929584 | orchestrator | Thursday 09 April 2026 00:57:07 +0000 (0:00:00.268) 0:00:24.870 ******** 2026-04-09 00:58:24.929591 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:58:24.929598 | orchestrator | 2026-04-09 00:58:24.929605 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-09 00:58:24.929615 | orchestrator | Thursday 09 April 2026 00:57:08 +0000 (0:00:00.735) 0:00:25.605 ******** 2026-04-09 00:58:24.929622 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:58:24.929631 | orchestrator | 2026-04-09 00:58:24.929644 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-09 00:58:24.929654 | orchestrator | Thursday 09 April 2026 00:57:10 +0000 (0:00:02.401) 0:00:28.007 ******** 2026-04-09 00:58:24.929661 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:58:24.929667 | orchestrator | 2026-04-09 00:58:24.929726 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-09 00:58:24.929733 | orchestrator | Thursday 09 April 2026 00:57:13 +0000 (0:00:02.540) 0:00:30.547 ******** 2026-04-09 00:58:24.929739 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:58:24.929745 | orchestrator 
| 2026-04-09 00:58:24.929750 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 00:58:24.929764 | orchestrator | Thursday 09 April 2026 00:57:31 +0000 (0:00:17.568) 0:00:48.116 ******** 2026-04-09 00:58:24.929770 | orchestrator | 2026-04-09 00:58:24.929776 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 00:58:24.929782 | orchestrator | Thursday 09 April 2026 00:57:31 +0000 (0:00:00.058) 0:00:48.174 ******** 2026-04-09 00:58:24.929788 | orchestrator | 2026-04-09 00:58:24.929793 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-09 00:58:24.929799 | orchestrator | Thursday 09 April 2026 00:57:31 +0000 (0:00:00.061) 0:00:48.235 ******** 2026-04-09 00:58:24.929805 | orchestrator | 2026-04-09 00:58:24.929810 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-09 00:58:24.929816 | orchestrator | Thursday 09 April 2026 00:57:31 +0000 (0:00:00.060) 0:00:48.296 ******** 2026-04-09 00:58:24.929823 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:58:24.929830 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:58:24.929837 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:58:24.929842 | orchestrator | 2026-04-09 00:58:24.929848 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:58:24.929854 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-09 00:58:24.929861 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-09 00:58:24.929867 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-09 00:58:24.929874 | orchestrator | 2026-04-09 00:58:24.929880 | orchestrator | 2026-04-09 00:58:24.929886 
| orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:58:24.929892 | orchestrator | Thursday 09 April 2026 00:58:22 +0000 (0:00:51.213) 0:01:39.510 ******** 2026-04-09 00:58:24.929905 | orchestrator | =============================================================================== 2026-04-09 00:58:24.929911 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.21s 2026-04-09 00:58:24.929916 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.57s 2026-04-09 00:58:24.929922 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.54s 2026-04-09 00:58:24.929927 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.40s 2026-04-09 00:58:24.929933 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.00s 2026-04-09 00:58:24.929939 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.93s 2026-04-09 00:58:24.929945 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.66s 2026-04-09 00:58:24.929951 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.63s 2026-04-09 00:58:24.929956 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s 2026-04-09 00:58:24.929962 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.47s 2026-04-09 00:58:24.929968 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.35s 2026-04-09 00:58:24.929974 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.20s 2026-04-09 00:58:24.929980 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.89s 2026-04-09 00:58:24.929986 | orchestrator 
| horizon : include_tasks ------------------------------------------------- 0.74s 2026-04-09 00:58:24.929992 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2026-04-09 00:58:24.929999 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2026-04-09 00:58:24.930005 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-04-09 00:58:24.930011 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2026-04-09 00:58:24.930070 | orchestrator | horizon : Update policy file name --------------------------------------- 0.45s 2026-04-09 00:58:24.930077 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2026-04-09 00:58:24.930083 | orchestrator | 2026-04-09 00:58:24 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:24.930860 | orchestrator | 2026-04-09 00:58:24 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:24.930895 | orchestrator | 2026-04-09 00:58:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:27.990859 | orchestrator | 2026-04-09 00:58:27 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:27.992545 | orchestrator | 2026-04-09 00:58:27 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:27.992620 | orchestrator | 2026-04-09 00:58:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:31.052845 | orchestrator | 2026-04-09 00:58:31 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:31.054627 | orchestrator | 2026-04-09 00:58:31 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:31.054759 | orchestrator | 2026-04-09 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 
00:58:34.098881 | orchestrator | 2026-04-09 00:58:34 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:34.100630 | orchestrator | 2026-04-09 00:58:34 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:34.100780 | orchestrator | 2026-04-09 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:37.161498 | orchestrator | 2026-04-09 00:58:37 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:37.164042 | orchestrator | 2026-04-09 00:58:37 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:37.164506 | orchestrator | 2026-04-09 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:40.207438 | orchestrator | 2026-04-09 00:58:40 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:40.207540 | orchestrator | 2026-04-09 00:58:40 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:40.207552 | orchestrator | 2026-04-09 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:43.258599 | orchestrator | 2026-04-09 00:58:43 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:43.264980 | orchestrator | 2026-04-09 00:58:43 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:43.267036 | orchestrator | 2026-04-09 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:46.310221 | orchestrator | 2026-04-09 00:58:46 | INFO  | Task bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state STARTED 2026-04-09 00:58:46.311383 | orchestrator | 2026-04-09 00:58:46 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:46.311434 | orchestrator | 2026-04-09 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:49.364822 | orchestrator | 2026-04-09 00:58:49 | INFO  | Task 
bb0918f2-190b-4fe7-8918-1a4f0f20014f is in state SUCCESS 2026-04-09 00:58:49.366261 | orchestrator | 2026-04-09 00:58:49 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:49.368138 | orchestrator | 2026-04-09 00:58:49 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:58:49.368382 | orchestrator | 2026-04-09 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:52.414343 | orchestrator | 2026-04-09 00:58:52 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:52.415968 | orchestrator | 2026-04-09 00:58:52 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:58:52.416031 | orchestrator | 2026-04-09 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:55.455559 | orchestrator | 2026-04-09 00:58:55 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:55.455664 | orchestrator | 2026-04-09 00:58:55 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:58:55.455673 | orchestrator | 2026-04-09 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:58:58.513427 | orchestrator | 2026-04-09 00:58:58 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:58:58.516381 | orchestrator | 2026-04-09 00:58:58 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:58:58.516450 | orchestrator | 2026-04-09 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:01.562071 | orchestrator | 2026-04-09 00:59:01 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:01.562522 | orchestrator | 2026-04-09 00:59:01 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:01.562545 | orchestrator | 2026-04-09 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 
00:59:04.596723 | orchestrator | 2026-04-09 00:59:04 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:04.597812 | orchestrator | 2026-04-09 00:59:04 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:04.597880 | orchestrator | 2026-04-09 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:07.644007 | orchestrator | 2026-04-09 00:59:07 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:07.644149 | orchestrator | 2026-04-09 00:59:07 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:07.644185 | orchestrator | 2026-04-09 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:10.678707 | orchestrator | 2026-04-09 00:59:10 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:10.681874 | orchestrator | 2026-04-09 00:59:10 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:10.681939 | orchestrator | 2026-04-09 00:59:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:13.728712 | orchestrator | 2026-04-09 00:59:13 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:13.729787 | orchestrator | 2026-04-09 00:59:13 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:13.729841 | orchestrator | 2026-04-09 00:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:16.777180 | orchestrator | 2026-04-09 00:59:16 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:16.779064 | orchestrator | 2026-04-09 00:59:16 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:16.779132 | orchestrator | 2026-04-09 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:19.820076 | orchestrator | 2026-04-09 00:59:19 | INFO  | Task 
79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:19.822340 | orchestrator | 2026-04-09 00:59:19 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:19.822383 | orchestrator | 2026-04-09 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:22.866775 | orchestrator | 2026-04-09 00:59:22 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:22.868555 | orchestrator | 2026-04-09 00:59:22 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:22.868651 | orchestrator | 2026-04-09 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:25.912767 | orchestrator | 2026-04-09 00:59:25 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:25.915132 | orchestrator | 2026-04-09 00:59:25 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:25.915227 | orchestrator | 2026-04-09 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:28.958780 | orchestrator | 2026-04-09 00:59:28 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:28.960189 | orchestrator | 2026-04-09 00:59:28 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:28.960235 | orchestrator | 2026-04-09 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:32.000457 | orchestrator | 2026-04-09 00:59:32 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:32.003016 | orchestrator | 2026-04-09 00:59:32 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:32.003086 | orchestrator | 2026-04-09 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:35.046868 | orchestrator | 2026-04-09 00:59:35 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 
00:59:35.048796 | orchestrator | 2026-04-09 00:59:35 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:35.049002 | orchestrator | 2026-04-09 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:38.091797 | orchestrator | 2026-04-09 00:59:38 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:38.091844 | orchestrator | 2026-04-09 00:59:38 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:38.091850 | orchestrator | 2026-04-09 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:41.142541 | orchestrator | 2026-04-09 00:59:41 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:41.142688 | orchestrator | 2026-04-09 00:59:41 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:41.142698 | orchestrator | 2026-04-09 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:44.176095 | orchestrator | 2026-04-09 00:59:44 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:44.177264 | orchestrator | 2026-04-09 00:59:44 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state STARTED 2026-04-09 00:59:44.177291 | orchestrator | 2026-04-09 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:47.220882 | orchestrator | 2026-04-09 00:59:47 | INFO  | Task c7b9ea9b-f137-49d6-8f75-140c6bc837c8 is in state STARTED 2026-04-09 00:59:47.220971 | orchestrator | 2026-04-09 00:59:47 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:47.220979 | orchestrator | 2026-04-09 00:59:47 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 00:59:47.220984 | orchestrator | 2026-04-09 00:59:47 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 00:59:47.222351 | orchestrator | 2026-04-09 
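The interleaved status lines above follow a simple client-side polling pattern: each known task ID is queried for its Celery-style state (STARTED, SUCCESS), finished tasks drop out of later rounds, and the client sleeps before the next check. A minimal sketch of such a wait loop — the `get_state` callback is a hypothetical stand-in for the real OSISM API client, which works differently in detail:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task's state until all of them report SUCCESS.

    get_state(task_id) -> str is a caller-supplied lookup; it stands
    in here for whatever client queries the task backend.
    """
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                # Finished tasks are no longer polled in later rounds,
                # matching the log where SUCCESS lines appear once.
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

This reproduces the shape of the log output: a status line per pending task per round, followed by the "Wait 1 second(s)" line while anything is still running.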
00:59:47.222386 | orchestrator | 2026-04-09 00:59:47.222391 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-09 00:59:47.222396 | orchestrator | 2026-04-09 00:59:47.222400 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-09 00:59:47.222405 | orchestrator | Thursday 09 April 2026 00:58:13 +0000 (0:00:00.226) 0:00:00.226 ******** 2026-04-09 00:59:47.222409 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:47.222414 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222418 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222422 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:47.222426 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222430 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:47.222434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:47.222438 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:47.222442 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:47.222446 | orchestrator | 2026-04-09 00:59:47.222450 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-09 00:59:47.222454 | orchestrator | Thursday 09 April 2026 00:58:18 +0000 (0:00:04.754) 0:00:04.980 ******** 2026-04-09 00:59:47.222457 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:47.222476 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222480 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222484 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:47.222487 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222491 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:47.222495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:47.222499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:47.222503 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:47.222506 | orchestrator | 2026-04-09 00:59:47.222510 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-09 00:59:47.222514 | orchestrator | Thursday 09 April 2026 00:58:23 +0000 (0:00:04.377) 0:00:09.357 ******** 2026-04-09 00:59:47.222519 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-09 00:59:47.222523 | orchestrator | 2026-04-09 00:59:47.222526 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-09 00:59:47.222530 | orchestrator | Thursday 09 April 2026 00:58:23 +0000 (0:00:00.918) 0:00:10.276 ******** 2026-04-09 00:59:47.222534 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-09 
00:59:47.222538 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222542 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222546 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:47.222550 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222554 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:47.222558 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:47.222561 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:47.222596 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:47.222600 | orchestrator | 2026-04-09 00:59:47.222604 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-09 00:59:47.222608 | orchestrator | Thursday 09 April 2026 00:58:37 +0000 (0:00:13.713) 0:00:23.990 ******** 2026-04-09 00:59:47.222620 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-09 00:59:47.222624 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-09 00:59:47.222630 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-09 00:59:47.222636 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-09 00:59:47.222652 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-09 00:59:47.222662 | orchestrator 
| ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-09 00:59:47.222670 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-09 00:59:47.222676 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-09 00:59:47.222695 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-09 00:59:47.222700 | orchestrator | 2026-04-09 00:59:47.222705 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-09 00:59:47.222711 | orchestrator | Thursday 09 April 2026 00:58:40 +0000 (0:00:03.310) 0:00:27.301 ******** 2026-04-09 00:59:47.222717 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-09 00:59:47.222723 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222728 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222733 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-09 00:59:47.222739 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-09 00:59:47.222744 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-09 00:59:47.222750 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-09 00:59:47.222755 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-09 00:59:47.222760 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-09 00:59:47.222766 | orchestrator | 2026-04-09 00:59:47.222771 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:47.222777 | orchestrator | testbed-manager : ok=6 
 changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:59:47.222784 | orchestrator | 2026-04-09 00:59:47.222790 | orchestrator | 2026-04-09 00:59:47.222796 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:47.222802 | orchestrator | Thursday 09 April 2026 00:58:47 +0000 (0:00:06.607) 0:00:33.908 ******** 2026-04-09 00:59:47.222808 | orchestrator | =============================================================================== 2026-04-09 00:59:47.222814 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.71s 2026-04-09 00:59:47.222820 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.61s 2026-04-09 00:59:47.222826 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.75s 2026-04-09 00:59:47.222831 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.38s 2026-04-09 00:59:47.222837 | orchestrator | Check if target directories exist --------------------------------------- 3.31s 2026-04-09 00:59:47.222843 | orchestrator | Create share directory -------------------------------------------------- 0.92s 2026-04-09 00:59:47.222849 | orchestrator | 2026-04-09 00:59:47.222855 | orchestrator | 2026-04-09 00:59:47.222861 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-09 00:59:47.222868 | orchestrator | 2026-04-09 00:59:47.222872 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-09 00:59:47.222876 | orchestrator | Thursday 09 April 2026 00:58:50 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-04-09 00:59:47.222880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-09 00:59:47.222885 | orchestrator | 2026-04-09 
00:59:47.222888 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-09 00:59:47.222892 | orchestrator | Thursday 09 April 2026 00:58:51 +0000 (0:00:00.199) 0:00:00.457 ******** 2026-04-09 00:59:47.222896 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-09 00:59:47.222900 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-09 00:59:47.222904 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-09 00:59:47.222907 | orchestrator | 2026-04-09 00:59:47.222911 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-09 00:59:47.222915 | orchestrator | Thursday 09 April 2026 00:58:52 +0000 (0:00:01.462) 0:00:01.920 ******** 2026-04-09 00:59:47.222923 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-09 00:59:47.222927 | orchestrator | 2026-04-09 00:59:47.222931 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-09 00:59:47.222935 | orchestrator | Thursday 09 April 2026 00:58:53 +0000 (0:00:01.083) 0:00:03.003 ******** 2026-04-09 00:59:47.222939 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:47.222943 | orchestrator | 2026-04-09 00:59:47.222946 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-09 00:59:47.222954 | orchestrator | Thursday 09 April 2026 00:58:54 +0000 (0:00:00.811) 0:00:03.815 ******** 2026-04-09 00:59:47.222959 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:47.222964 | orchestrator | 2026-04-09 00:59:47.222968 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-09 00:59:47.222972 | orchestrator | Thursday 09 April 2026 00:58:55 +0000 (0:00:00.892) 0:00:04.708 ******** 2026-04-09 
00:59:47.222976 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-09 00:59:47.222981 | orchestrator | ok: [testbed-manager] 2026-04-09 00:59:47.222985 | orchestrator | 2026-04-09 00:59:47.222990 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-09 00:59:47.223000 | orchestrator | Thursday 09 April 2026 00:59:35 +0000 (0:00:40.130) 0:00:44.839 ******** 2026-04-09 00:59:47.223006 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-09 00:59:47.223013 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-09 00:59:47.223021 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-09 00:59:47.223029 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-09 00:59:47.223036 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-09 00:59:47.223042 | orchestrator | 2026-04-09 00:59:47.223048 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-09 00:59:47.223054 | orchestrator | Thursday 09 April 2026 00:59:39 +0000 (0:00:04.097) 0:00:48.936 ******** 2026-04-09 00:59:47.223059 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-09 00:59:47.223066 | orchestrator | 2026-04-09 00:59:47.223072 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-09 00:59:47.223078 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:00.620) 0:00:49.557 ******** 2026-04-09 00:59:47.223083 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:47.223090 | orchestrator | 2026-04-09 00:59:47.223096 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-09 00:59:47.223103 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:00.138) 0:00:49.695 ******** 2026-04-09 00:59:47.223110 | orchestrator | skipping: 
[testbed-manager] 2026-04-09 00:59:47.223116 | orchestrator | 2026-04-09 00:59:47.223123 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-09 00:59:47.223129 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:00.305) 0:00:50.001 ******** 2026-04-09 00:59:47.223136 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:47.223141 | orchestrator | 2026-04-09 00:59:47.223146 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-09 00:59:47.223150 | orchestrator | Thursday 09 April 2026 00:59:42 +0000 (0:00:01.417) 0:00:51.418 ******** 2026-04-09 00:59:47.223154 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:47.223158 | orchestrator | 2026-04-09 00:59:47.223163 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-09 00:59:47.223167 | orchestrator | Thursday 09 April 2026 00:59:42 +0000 (0:00:00.690) 0:00:52.109 ******** 2026-04-09 00:59:47.223171 | orchestrator | changed: [testbed-manager] 2026-04-09 00:59:47.223175 | orchestrator | 2026-04-09 00:59:47.223180 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-09 00:59:47.223184 | orchestrator | Thursday 09 April 2026 00:59:43 +0000 (0:00:00.573) 0:00:52.682 ******** 2026-04-09 00:59:47.223193 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-09 00:59:47.223198 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-09 00:59:47.223202 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-09 00:59:47.223206 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-09 00:59:47.223210 | orchestrator | 2026-04-09 00:59:47.223215 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:47.223219 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-09 00:59:47.223224 | orchestrator | 2026-04-09 00:59:47.223228 | orchestrator | 2026-04-09 00:59:47.223232 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:47.223237 | orchestrator | Thursday 09 April 2026 00:59:44 +0000 (0:00:01.320) 0:00:54.003 ******** 2026-04-09 00:59:47.223241 | orchestrator | =============================================================================== 2026-04-09 00:59:47.223246 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.13s 2026-04-09 00:59:47.223250 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s 2026-04-09 00:59:47.223254 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.46s 2026-04-09 00:59:47.223259 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.42s 2026-04-09 00:59:47.223263 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.32s 2026-04-09 00:59:47.223268 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.08s 2026-04-09 00:59:47.223272 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s 2026-04-09 00:59:47.223277 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s 2026-04-09 00:59:47.223281 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2026-04-09 00:59:47.223288 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.62s 2026-04-09 00:59:47.223294 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2026-04-09 00:59:47.223303 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2026-04-09 00:59:47.223312 | 
orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-04-09 00:59:47.223318 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-04-09 00:59:47.223328 | orchestrator | 2026-04-09 00:59:47 | INFO  | Task 01426604-a60a-4f5c-a43a-7b39f0885cbb is in state SUCCESS 2026-04-09 00:59:47.223335 | orchestrator | 2026-04-09 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:50.256364 | orchestrator | 2026-04-09 00:59:50 | INFO  | Task c7b9ea9b-f137-49d6-8f75-140c6bc837c8 is in state SUCCESS 2026-04-09 00:59:50.256505 | orchestrator | 2026-04-09 00:59:50 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state STARTED 2026-04-09 00:59:50.257375 | orchestrator | 2026-04-09 00:59:50 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 00:59:50.258687 | orchestrator | 2026-04-09 00:59:50 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 00:59:50.258749 | orchestrator | 2026-04-09 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:53.293019 | orchestrator | 2026-04-09 00:59:53.293159 | orchestrator | 2026-04-09 00:59:53.293169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:59:53.293178 | orchestrator | 2026-04-09 00:59:53.293184 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:59:53.293192 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:00.172) 0:00:00.172 ******** 2026-04-09 00:59:53.293198 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.293206 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.293237 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.293280 | orchestrator | 2026-04-09 00:59:53.293288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 
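The TASKS RECAP blocks above (produced by an Ansible timing callback in the style of `profile_tasks`) list each task sorted by duration, longest first, with dashes padding the line to a fixed width. A small sketch of that formatting, assuming durations have already been collected as a name-to-seconds mapping:

```python
def format_recap(durations, width=79):
    """Render 'name ---- 1.23s' lines sorted by descending duration."""
    lines = []
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        stamp = f"{secs:.2f}s"
        # Pad with dashes so every line reaches the same total width.
        dashes = "-" * max(1, width - len(name) - len(stamp) - 2)
        lines.append(f"{name} {dashes} {stamp}")
    return lines
```

Applied to the cephclient recap above, this would put "Manage cephclient service" (40.13s, dominated by its retry loop) first, just as the log shows.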
00:59:53.293294 | orchestrator | Thursday 09 April 2026 00:59:48 +0000 (0:00:00.372) 0:00:00.545 ******** 2026-04-09 00:59:53.293300 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-09 00:59:53.293307 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-09 00:59:53.293312 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-09 00:59:53.293318 | orchestrator | 2026-04-09 00:59:53.293507 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-09 00:59:53.293513 | orchestrator | 2026-04-09 00:59:53.293517 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-09 00:59:53.293521 | orchestrator | Thursday 09 April 2026 00:59:48 +0000 (0:00:00.434) 0:00:00.979 ******** 2026-04-09 00:59:53.293525 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.293528 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.293532 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.293536 | orchestrator | 2026-04-09 00:59:53.293539 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:53.293544 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:59:53.293549 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:59:53.293552 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:59:53.293604 | orchestrator | 2026-04-09 00:59:53.293608 | orchestrator | 2026-04-09 00:59:53.293612 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:53.293615 | orchestrator | Thursday 09 April 2026 00:59:49 +0000 (0:00:01.081) 0:00:02.061 ******** 2026-04-09 00:59:53.293619 | orchestrator | 
=============================================================================== 2026-04-09 00:59:53.293623 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.08s 2026-04-09 00:59:53.293627 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-04-09 00:59:53.293631 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-04-09 00:59:53.293635 | orchestrator | 2026-04-09 00:59:53.293639 | orchestrator | 2026-04-09 00:59:53.293649 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:59:53.293653 | orchestrator | 2026-04-09 00:59:53.293656 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:59:53.293660 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.291) 0:00:00.291 ******** 2026-04-09 00:59:53.293664 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.293668 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.293671 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.293675 | orchestrator | 2026-04-09 00:59:53.293679 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:59:53.293683 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.293) 0:00:00.584 ******** 2026-04-09 00:59:53.293687 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-09 00:59:53.293690 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-09 00:59:53.293694 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-09 00:59:53.293698 | orchestrator | 2026-04-09 00:59:53.293701 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-09 00:59:53.293705 | orchestrator | 2026-04-09 00:59:53.293709 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-04-09 00:59:53.293713 | orchestrator | Thursday 09 April 2026 00:56:43 +0000 (0:00:00.261) 0:00:00.846 ******** 2026-04-09 00:59:53.293717 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:59:53.293729 | orchestrator | 2026-04-09 00:59:53.293733 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-09 00:59:53.293737 | orchestrator | Thursday 09 April 2026 00:56:44 +0000 (0:00:00.541) 0:00:01.387 ******** 2026-04-09 00:59:53.293781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.293788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.293793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.293798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293846 | orchestrator | 2026-04-09 00:59:53.293850 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-09 00:59:53.293854 | orchestrator | Thursday 09 April 2026 00:56:46 +0000 (0:00:02.139) 0:00:03.527 ******** 2026-04-09 00:59:53.293858 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.293862 | orchestrator | 2026-04-09 00:59:53.293866 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-09 00:59:53.293870 | orchestrator | Thursday 09 April 2026 00:56:46 +0000 (0:00:00.121) 0:00:03.649 ******** 2026-04-09 00:59:53.293873 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.293877 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.293881 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.293885 | orchestrator | 2026-04-09 00:59:53.293888 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-09 00:59:53.293897 | orchestrator | Thursday 09 April 2026 00:56:46 +0000 (0:00:00.255) 0:00:03.905 ******** 2026-04-09 00:59:53.293901 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:59:53.293905 | orchestrator | 2026-04-09 00:59:53.293908 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:59:53.293912 | orchestrator | Thursday 09 April 2026 00:56:47 +0000 (0:00:00.828) 0:00:04.733 ******** 2026-04-09 00:59:53.293916 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:59:53.293920 | orchestrator | 2026-04-09 00:59:53.293924 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-09 00:59:53.293927 | orchestrator | Thursday 09 April 2026 00:56:48 +0000 (0:00:00.686) 0:00:05.420 ******** 2026-04-09 00:59:53.293934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.293951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.293956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.293960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-04-09 00:59:53.293991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.293995 | orchestrator | 2026-04-09 00:59:53.293999 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-09 00:59:53.294003 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:03.171) 0:00:08.592 ******** 2026-04-09 00:59:53.294007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}})  2026-04-09 00:59:53.294047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294058 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294085 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294160 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294166 | orchestrator | 2026-04-09 00:59:53.294170 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-09 00:59:53.294175 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.484) 0:00:09.076 ******** 2026-04-09 00:59:53.294180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294198 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-09 00:59:53.294218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294227 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294249 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294254 | orchestrator | 2026-04-09 00:59:53.294258 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-09 00:59:53.294262 | orchestrator | Thursday 09 
April 2026 00:56:52 +0000 (0:00:00.777) 0:00:09.853 ******** 2026-04-09 00:59:53.294274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.294280 | orchestrator | 2026-04-09 00:59:53 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 00:59:53.294284 | orchestrator | 2026-04-09 00:59:53 | INFO  | Task 79a78001-0cac-4966-a5e0-f07ca125b09b is in state SUCCESS 2026-04-09 00:59:53.294289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.294298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.294302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294340 | orchestrator | 2026-04-09 00:59:53.294344 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-09 00:59:53.294349 | orchestrator | Thursday 09 April 2026 00:56:55 +0000 (0:00:03.183) 0:00:13.036 ******** 2026-04-09 00:59:53.294353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.294360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.294381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.294390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.294428 | orchestrator | 2026-04-09 00:59:53.294433 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-09 00:59:53.294437 | orchestrator | Thursday 09 April 2026 00:57:00 +0000 (0:00:04.918) 0:00:17.955 ******** 2026-04-09 00:59:53.294441 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.294446 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:53.294450 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:59:53.294455 | orchestrator | 2026-04-09 00:59:53.294459 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-09 00:59:53.294464 | orchestrator | Thursday 09 April 2026 00:57:02 +0000 (0:00:01.497) 0:00:19.452 ******** 2026-04-09 00:59:53.294468 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294473 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294477 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294482 | orchestrator | 2026-04-09 00:59:53.294486 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-09 00:59:53.294490 | orchestrator | Thursday 09 April 2026 00:57:03 +0000 (0:00:01.068) 0:00:20.521 ******** 2026-04-09 
00:59:53.294495 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294499 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294503 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294507 | orchestrator | 2026-04-09 00:59:53.294512 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-09 00:59:53.294516 | orchestrator | Thursday 09 April 2026 00:57:03 +0000 (0:00:00.310) 0:00:20.831 ******** 2026-04-09 00:59:53.294519 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294523 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294527 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294530 | orchestrator | 2026-04-09 00:59:53.294534 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-09 00:59:53.294538 | orchestrator | Thursday 09 April 2026 00:57:03 +0000 (0:00:00.277) 0:00:21.109 ******** 2026-04-09 00:59:53.294542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294616 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294633 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-09 00:59:53.294649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:59:53.294656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:59:53.294661 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294665 | orchestrator | 2026-04-09 00:59:53.294669 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:59:53.294673 | orchestrator | Thursday 09 April 2026 00:57:04 +0000 (0:00:00.554) 0:00:21.664 ******** 2026-04-09 00:59:53.294677 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294681 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294685 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294688 | orchestrator | 2026-04-09 00:59:53.294692 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-09 00:59:53.294696 | orchestrator | Thursday 09 April 2026 00:57:04 +0000 (0:00:00.421) 0:00:22.085 ******** 2026-04-09 00:59:53.294700 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 00:59:53.294705 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 00:59:53.294710 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 00:59:53.294714 | orchestrator | 2026-04-09 00:59:53.294718 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-09 00:59:53.294722 | orchestrator | Thursday 09 April 2026 00:57:06 +0000 (0:00:01.600) 0:00:23.686 ******** 2026-04-09 00:59:53.294726 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:59:53.294729 | orchestrator | 2026-04-09 00:59:53.294733 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-09 00:59:53.294737 | 
orchestrator | Thursday 09 April 2026 00:57:07 +0000 (0:00:01.089) 0:00:24.775 ******** 2026-04-09 00:59:53.294741 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.294745 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.294749 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.294753 | orchestrator | 2026-04-09 00:59:53.294757 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-09 00:59:53.294761 | orchestrator | Thursday 09 April 2026 00:57:08 +0000 (0:00:00.544) 0:00:25.320 ******** 2026-04-09 00:59:53.294765 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 00:59:53.294769 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:59:53.294773 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 00:59:53.294777 | orchestrator | 2026-04-09 00:59:53.294781 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-09 00:59:53.294785 | orchestrator | Thursday 09 April 2026 00:57:09 +0000 (0:00:01.055) 0:00:26.375 ******** 2026-04-09 00:59:53.294793 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.294797 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.294801 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.294805 | orchestrator | 2026-04-09 00:59:53.294809 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-09 00:59:53.294813 | orchestrator | Thursday 09 April 2026 00:57:09 +0000 (0:00:00.475) 0:00:26.850 ******** 2026-04-09 00:59:53.294819 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 00:59:53.294825 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 00:59:53.294831 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 00:59:53.294839 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 00:59:53.294849 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 00:59:53.294857 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 00:59:53.294862 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 00:59:53.294867 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 00:59:53.294879 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 00:59:53.294884 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-09 00:59:53.294891 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-09 00:59:53.294899 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-09 00:59:53.294904 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-09 00:59:53.294910 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-09 00:59:53.294920 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-09 00:59:53.294926 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 00:59:53.294932 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 00:59:53.294937 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 00:59:53.294943 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 00:59:53.294949 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 00:59:53.294955 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 00:59:53.294961 | orchestrator | 2026-04-09 00:59:53.294966 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-09 00:59:53.294972 | orchestrator | Thursday 09 April 2026 00:57:18 +0000 (0:00:08.825) 0:00:35.675 ******** 2026-04-09 00:59:53.294978 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 00:59:53.294984 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 00:59:53.294991 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 00:59:53.294997 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 00:59:53.295002 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 00:59:53.295014 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 00:59:53.295020 | orchestrator | 2026-04-09 00:59:53.295025 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-09 00:59:53.295031 | orchestrator | Thursday 09 April 2026 00:57:21 +0000 (0:00:02.545) 0:00:38.221 ******** 2026-04-09 00:59:53.295037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.295048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.295061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-09 00:59:53.295067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.295078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.295083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:59:53.295089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.295098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.295104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:59:53.295111 | orchestrator | 2026-04-09 00:59:53.295120 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:59:53.295127 | orchestrator | Thursday 09 April 2026 00:57:23 +0000 (0:00:02.421) 0:00:40.643 ******** 2026-04-09 00:59:53.295132 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295136 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.295140 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.295143 | orchestrator | 2026-04-09 00:59:53.295147 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-09 00:59:53.295151 | orchestrator | Thursday 09 April 2026 00:57:23 +0000 (0:00:00.359) 0:00:41.002 ******** 2026-04-09 00:59:53.295155 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295161 | orchestrator | 2026-04-09 00:59:53.295166 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-09 00:59:53.295176 | orchestrator | Thursday 09 April 2026 00:57:26 +0000 (0:00:02.349) 0:00:43.351 ******** 2026-04-09 00:59:53.295181 | orchestrator | changed: 
[testbed-node-0] 2026-04-09 00:59:53.295186 | orchestrator | 2026-04-09 00:59:53.295196 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-09 00:59:53.295204 | orchestrator | Thursday 09 April 2026 00:57:28 +0000 (0:00:02.409) 0:00:45.761 ******** 2026-04-09 00:59:53.295209 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.295215 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.295220 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.295226 | orchestrator | 2026-04-09 00:59:53.295232 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-09 00:59:53.295238 | orchestrator | Thursday 09 April 2026 00:57:29 +0000 (0:00:00.776) 0:00:46.538 ******** 2026-04-09 00:59:53.295243 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.295248 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.295253 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.295259 | orchestrator | 2026-04-09 00:59:53.295265 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-09 00:59:53.295271 | orchestrator | Thursday 09 April 2026 00:57:29 +0000 (0:00:00.252) 0:00:46.790 ******** 2026-04-09 00:59:53.295277 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295282 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.295288 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.295294 | orchestrator | 2026-04-09 00:59:53.295300 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-09 00:59:53.295306 | orchestrator | Thursday 09 April 2026 00:57:29 +0000 (0:00:00.288) 0:00:47.079 ******** 2026-04-09 00:59:53.295311 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295318 | orchestrator | 2026-04-09 00:59:53.295324 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] 
****************** 2026-04-09 00:59:53.295329 | orchestrator | Thursday 09 April 2026 00:57:46 +0000 (0:00:16.706) 0:01:03.785 ******** 2026-04-09 00:59:53.295335 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295341 | orchestrator | 2026-04-09 00:59:53.295347 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-09 00:59:53.295353 | orchestrator | Thursday 09 April 2026 00:57:59 +0000 (0:00:12.399) 0:01:16.185 ******** 2026-04-09 00:59:53.295359 | orchestrator | 2026-04-09 00:59:53.295365 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-09 00:59:53.295371 | orchestrator | Thursday 09 April 2026 00:57:59 +0000 (0:00:00.061) 0:01:16.246 ******** 2026-04-09 00:59:53.295377 | orchestrator | 2026-04-09 00:59:53.295383 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-09 00:59:53.295389 | orchestrator | Thursday 09 April 2026 00:57:59 +0000 (0:00:00.062) 0:01:16.309 ******** 2026-04-09 00:59:53.295396 | orchestrator | 2026-04-09 00:59:53.295401 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-09 00:59:53.295404 | orchestrator | Thursday 09 April 2026 00:57:59 +0000 (0:00:00.063) 0:01:16.372 ******** 2026-04-09 00:59:53.295408 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295412 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:59:53.295416 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:53.295419 | orchestrator | 2026-04-09 00:59:53.295423 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-09 00:59:53.295427 | orchestrator | Thursday 09 April 2026 00:58:39 +0000 (0:00:40.379) 0:01:56.752 ******** 2026-04-09 00:59:53.295431 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295434 | orchestrator | changed: [testbed-node-1] 2026-04-09 
00:59:53.295438 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:53.295442 | orchestrator | 2026-04-09 00:59:53.295445 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-09 00:59:53.295449 | orchestrator | Thursday 09 April 2026 00:58:44 +0000 (0:00:04.853) 0:02:01.606 ******** 2026-04-09 00:59:53.295453 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295462 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:53.295466 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:59:53.295470 | orchestrator | 2026-04-09 00:59:53.295473 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:59:53.295477 | orchestrator | Thursday 09 April 2026 00:58:55 +0000 (0:00:11.098) 0:02:12.705 ******** 2026-04-09 00:59:53.295485 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:59:53.295489 | orchestrator | 2026-04-09 00:59:53.295492 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-09 00:59:53.295496 | orchestrator | Thursday 09 April 2026 00:58:56 +0000 (0:00:00.540) 0:02:13.245 ******** 2026-04-09 00:59:53.295500 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:53.295503 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.295507 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:53.295511 | orchestrator | 2026-04-09 00:59:53.295515 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-09 00:59:53.295518 | orchestrator | Thursday 09 April 2026 00:58:56 +0000 (0:00:00.813) 0:02:14.058 ******** 2026-04-09 00:59:53.295522 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:53.295526 | orchestrator | 2026-04-09 00:59:53.295529 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] 
**** 2026-04-09 00:59:53.295534 | orchestrator | Thursday 09 April 2026 00:58:58 +0000 (0:00:01.658) 0:02:15.717 ******** 2026-04-09 00:59:53.295547 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-09 00:59:53.295552 | orchestrator | 2026-04-09 00:59:53.295579 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-09 00:59:53.295591 | orchestrator | Thursday 09 April 2026 00:59:11 +0000 (0:00:13.231) 0:02:28.948 ******** 2026-04-09 00:59:53.295596 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-09 00:59:53.295601 | orchestrator | 2026-04-09 00:59:53.295607 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-09 00:59:53.295612 | orchestrator | Thursday 09 April 2026 00:59:39 +0000 (0:00:27.829) 0:02:56.778 ******** 2026-04-09 00:59:53.295618 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-09 00:59:53.295625 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-09 00:59:53.295631 | orchestrator | 2026-04-09 00:59:53.295636 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-09 00:59:53.295642 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:07.679) 0:03:04.458 ******** 2026-04-09 00:59:53.295648 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295653 | orchestrator | 2026-04-09 00:59:53.295659 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-09 00:59:53.295664 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:00.098) 0:03:04.556 ******** 2026-04-09 00:59:53.295669 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295675 | orchestrator | 2026-04-09 00:59:53.295679 | orchestrator | TASK [service-ks-register : keystone | 
Creating roles] ************************* 2026-04-09 00:59:53.295684 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:00.093) 0:03:04.650 ******** 2026-04-09 00:59:53.295689 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295694 | orchestrator | 2026-04-09 00:59:53.295700 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-09 00:59:53.295705 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:00.096) 0:03:04.746 ******** 2026-04-09 00:59:53.295711 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295716 | orchestrator | 2026-04-09 00:59:53.295722 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-09 00:59:53.295727 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:00.276) 0:03:05.023 ******** 2026-04-09 00:59:53.295733 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:53.295745 | orchestrator | 2026-04-09 00:59:53.295751 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:59:53.295757 | orchestrator | Thursday 09 April 2026 00:59:51 +0000 (0:00:04.100) 0:03:09.123 ******** 2026-04-09 00:59:53.295764 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:53.295770 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:53.295776 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:53.295780 | orchestrator | 2026-04-09 00:59:53.295784 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:59:53.295788 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 00:59:53.295794 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-09 00:59:53.295798 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  
rescued=0 ignored=0 2026-04-09 00:59:53.295802 | orchestrator | 2026-04-09 00:59:53.295806 | orchestrator | 2026-04-09 00:59:53.295809 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:59:53.295813 | orchestrator | Thursday 09 April 2026 00:59:52 +0000 (0:00:00.989) 0:03:10.112 ******** 2026-04-09 00:59:53.295817 | orchestrator | =============================================================================== 2026-04-09 00:59:53.295820 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 40.38s 2026-04-09 00:59:53.295824 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.83s 2026-04-09 00:59:53.295828 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.71s 2026-04-09 00:59:53.295832 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.23s 2026-04-09 00:59:53.295835 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.40s 2026-04-09 00:59:53.295839 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.10s 2026-04-09 00:59:53.295843 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.83s 2026-04-09 00:59:53.295846 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.68s 2026-04-09 00:59:53.295850 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.92s 2026-04-09 00:59:53.295859 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.85s 2026-04-09 00:59:53.295863 | orchestrator | keystone : Creating default user role ----------------------------------- 4.10s 2026-04-09 00:59:53.295866 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.18s 2026-04-09 00:59:53.295870 | 
orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.17s 2026-04-09 00:59:53.295874 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.55s 2026-04-09 00:59:53.295877 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.42s 2026-04-09 00:59:53.295881 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.41s 2026-04-09 00:59:53.295885 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.35s 2026-04-09 00:59:53.295893 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.14s 2026-04-09 00:59:53.295897 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.66s 2026-04-09 00:59:53.295901 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.60s 2026-04-09 00:59:53.295905 | orchestrator | 2026-04-09 00:59:53 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 00:59:53.295909 | orchestrator | 2026-04-09 00:59:53 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 00:59:53.295912 | orchestrator | 2026-04-09 00:59:53 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 00:59:53.295920 | orchestrator | 2026-04-09 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:56.313971 | orchestrator | 2026-04-09 00:59:56 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 00:59:56.314316 | orchestrator | 2026-04-09 00:59:56 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 00:59:56.314842 | orchestrator | 2026-04-09 00:59:56 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 00:59:56.315458 | orchestrator | 2026-04-09 00:59:56 | INFO  | Task 
315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 00:59:56.316289 | orchestrator | 2026-04-09 00:59:56 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 00:59:56.316351 | orchestrator | 2026-04-09 00:59:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:59.344501 | orchestrator | 2026-04-09 00:59:59 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 00:59:59.344640 | orchestrator | 2026-04-09 00:59:59 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 00:59:59.344653 | orchestrator | 2026-04-09 00:59:59 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 00:59:59.344661 | orchestrator | 2026-04-09 00:59:59 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 00:59:59.344668 | orchestrator | 2026-04-09 00:59:59 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 00:59:59.344676 | orchestrator | 2026-04-09 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:02.392095 | orchestrator | 2026-04-09 01:00:02 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:02.395893 | orchestrator | 2026-04-09 01:00:02 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:02.398170 | orchestrator | 2026-04-09 01:00:02 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:02.400196 | orchestrator | 2026-04-09 01:00:02 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:02.402149 | orchestrator | 2026-04-09 01:00:02 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:02.402195 | orchestrator | 2026-04-09 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:05.431462 | orchestrator | 2026-04-09 01:00:05 | INFO  | Task 
ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:05.433641 | orchestrator | 2026-04-09 01:00:05 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:05.435445 | orchestrator | 2026-04-09 01:00:05 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:05.438197 | orchestrator | 2026-04-09 01:00:05 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:05.439643 | orchestrator | 2026-04-09 01:00:05 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:05.440014 | orchestrator | 2026-04-09 01:00:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:08.476335 | orchestrator | 2026-04-09 01:00:08 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:08.476949 | orchestrator | 2026-04-09 01:00:08 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:08.477667 | orchestrator | 2026-04-09 01:00:08 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:08.478460 | orchestrator | 2026-04-09 01:00:08 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:08.479216 | orchestrator | 2026-04-09 01:00:08 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:08.479453 | orchestrator | 2026-04-09 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:11.529629 | orchestrator | 2026-04-09 01:00:11 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:11.532642 | orchestrator | 2026-04-09 01:00:11 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:11.536591 | orchestrator | 2026-04-09 01:00:11 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:11.539214 | orchestrator | 2026-04-09 01:00:11 | INFO  | Task 
315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:11.541427 | orchestrator | 2026-04-09 01:00:11 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:11.541471 | orchestrator | 2026-04-09 01:00:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:14.591575 | orchestrator | 2026-04-09 01:00:14 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:14.591643 | orchestrator | 2026-04-09 01:00:14 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:14.591649 | orchestrator | 2026-04-09 01:00:14 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:14.591654 | orchestrator | 2026-04-09 01:00:14 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:14.591658 | orchestrator | 2026-04-09 01:00:14 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:14.591663 | orchestrator | 2026-04-09 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:17.662319 | orchestrator | 2026-04-09 01:00:17 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:17.662414 | orchestrator | 2026-04-09 01:00:17 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:17.662423 | orchestrator | 2026-04-09 01:00:17 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:17.662431 | orchestrator | 2026-04-09 01:00:17 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:17.662440 | orchestrator | 2026-04-09 01:00:17 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:17.662448 | orchestrator | 2026-04-09 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:20.643261 | orchestrator | 2026-04-09 01:00:20 | INFO  | Task 
ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:20.643434 | orchestrator | 2026-04-09 01:00:20 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:20.643905 | orchestrator | 2026-04-09 01:00:20 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:20.644383 | orchestrator | 2026-04-09 01:00:20 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:20.645088 | orchestrator | 2026-04-09 01:00:20 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:20.645112 | orchestrator | 2026-04-09 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:23.679735 | orchestrator | 2026-04-09 01:00:23 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:23.680365 | orchestrator | 2026-04-09 01:00:23 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:23.681636 | orchestrator | 2026-04-09 01:00:23 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:23.682429 | orchestrator | 2026-04-09 01:00:23 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:23.683243 | orchestrator | 2026-04-09 01:00:23 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:23.683417 | orchestrator | 2026-04-09 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:26.710489 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:26.712285 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:26.714733 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:26.715774 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task 
315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:26.717103 | orchestrator | 2026-04-09 01:00:26 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:26.717157 | orchestrator | 2026-04-09 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:29.747380 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state STARTED 2026-04-09 01:00:29.750540 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:29.751824 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:29.752898 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:29.755756 | orchestrator | 2026-04-09 01:00:29 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:29.755796 | orchestrator | 2026-04-09 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:32.794791 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task ec42de98-ac27-4675-8711-b238a7afb392 is in state SUCCESS 2026-04-09 01:00:32.798955 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:00:32.800788 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:32.802735 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:32.806000 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:32.808462 | orchestrator | 2026-04-09 01:00:32 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:32.808506 | orchestrator | 2026-04-09 01:00:32 | INFO  | Wait 1 
second(s) until the next check 2026-04-09 01:00:35.837315 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:00:35.837408 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:35.838504 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:35.838966 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state STARTED 2026-04-09 01:00:35.839550 | orchestrator | 2026-04-09 01:00:35 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:35.839607 | orchestrator | 2026-04-09 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:38.876028 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:00:38.876343 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:00:38.877113 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:38.877610 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 315e3207-7b04-4e97-9636-5ee94567de35 is in state SUCCESS 2026-04-09 01:00:38.877920 | orchestrator | 2026-04-09 01:00:38.877937 | orchestrator | 2026-04-09 01:00:38.877943 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:00:38.877949 | orchestrator | 2026-04-09 01:00:38.877955 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:00:38.877961 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-04-09 01:00:38.877975 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:00:38.877982 | orchestrator | ok: [testbed-node-1] 
2026-04-09 01:00:38.877993 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:00:38.877999 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:00:38.878004 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:00:38.878010 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:00:38.878046 | orchestrator | ok: [testbed-manager] 2026-04-09 01:00:38.878052 | orchestrator | 2026-04-09 01:00:38.878058 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:00:38.878075 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:00.765) 0:00:01.050 ******** 2026-04-09 01:00:38.878081 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878087 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878092 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878097 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878103 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878108 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878113 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-09 01:00:38.878120 | orchestrator | 2026-04-09 01:00:38.878125 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-09 01:00:38.878130 | orchestrator | 2026-04-09 01:00:38.878135 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-09 01:00:38.878141 | orchestrator | Thursday 09 April 2026 00:59:55 +0000 (0:00:00.838) 0:00:01.889 ******** 2026-04-09 01:00:38.878147 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-09 01:00:38.878153 | orchestrator | 
2026-04-09 01:00:38.878159 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-09 01:00:38.878164 | orchestrator | Thursday 09 April 2026 00:59:57 +0000 (0:00:01.789) 0:00:03.678 ******** 2026-04-09 01:00:38.878169 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-09 01:00:38.878175 | orchestrator | 2026-04-09 01:00:38.878180 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-09 01:00:38.878185 | orchestrator | Thursday 09 April 2026 01:00:02 +0000 (0:00:05.327) 0:00:09.006 ******** 2026-04-09 01:00:38.878192 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-09 01:00:38.878203 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-09 01:00:38.878237 | orchestrator | 2026-04-09 01:00:38.878248 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-09 01:00:38.878257 | orchestrator | Thursday 09 April 2026 01:00:09 +0000 (0:00:06.520) 0:00:15.527 ******** 2026-04-09 01:00:38.878285 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:00:38.878294 | orchestrator | 2026-04-09 01:00:38.878302 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-09 01:00:38.878310 | orchestrator | Thursday 09 April 2026 01:00:12 +0000 (0:00:03.496) 0:00:19.023 ******** 2026-04-09 01:00:38.878318 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-09 01:00:38.878327 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:00:38.878417 | orchestrator | 2026-04-09 01:00:38.878423 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-09 
01:00:38.878428 | orchestrator | Thursday 09 April 2026 01:00:17 +0000 (0:00:04.908) 0:00:23.931 ******** 2026-04-09 01:00:38.878472 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:00:38.878480 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-09 01:00:38.878486 | orchestrator | 2026-04-09 01:00:38.878491 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-09 01:00:38.878496 | orchestrator | Thursday 09 April 2026 01:00:24 +0000 (0:00:07.087) 0:00:31.019 ******** 2026-04-09 01:00:38.878501 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-09 01:00:38.878526 | orchestrator | 2026-04-09 01:00:38.878531 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:00:38.878537 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878543 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878549 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878554 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878559 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878576 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878581 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.878586 | orchestrator | 2026-04-09 01:00:38.878592 | orchestrator | 2026-04-09 01:00:38.878597 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:00:38.878602 | 
orchestrator | Thursday 09 April 2026 01:00:31 +0000 (0:00:06.265) 0:00:37.284 ******** 2026-04-09 01:00:38.878607 | orchestrator | =============================================================================== 2026-04-09 01:00:38.878612 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.09s 2026-04-09 01:00:38.878617 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.52s 2026-04-09 01:00:38.878630 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.27s 2026-04-09 01:00:38.878635 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 5.33s 2026-04-09 01:00:38.878643 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.91s 2026-04-09 01:00:38.878652 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.50s 2026-04-09 01:00:38.878661 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.79s 2026-04-09 01:00:38.878674 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-04-09 01:00:38.878693 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s 2026-04-09 01:00:38.878701 | orchestrator | 2026-04-09 01:00:38.878710 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-09 01:00:38.878718 | orchestrator | 2.16.14 2026-04-09 01:00:38.878728 | orchestrator | 2026-04-09 01:00:38.878736 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-04-09 01:00:38.878744 | orchestrator | 2026-04-09 01:00:38.878752 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-09 01:00:38.878760 | orchestrator | Thursday 09 April 2026 00:59:48 +0000 (0:00:00.174) 0:00:00.174 ********
2026-04-09 01:00:38.878769 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878778 | orchestrator | 2026-04-09 01:00:38.878786 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-09 01:00:38.878794 | orchestrator | Thursday 09 April 2026 00:59:50 +0000 (0:00:01.893) 0:00:02.067 ******** 2026-04-09 01:00:38.878802 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878811 | orchestrator | 2026-04-09 01:00:38.878819 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-09 01:00:38.878825 | orchestrator | Thursday 09 April 2026 00:59:51 +0000 (0:00:01.062) 0:00:03.129 ******** 2026-04-09 01:00:38.878830 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878835 | orchestrator | 2026-04-09 01:00:38.878840 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-09 01:00:38.878845 | orchestrator | Thursday 09 April 2026 00:59:52 +0000 (0:00:01.035) 0:00:04.165 ******** 2026-04-09 01:00:38.878850 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878855 | orchestrator | 2026-04-09 01:00:38.878860 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-09 01:00:38.878865 | orchestrator | Thursday 09 April 2026 00:59:53 +0000 (0:00:01.124) 0:00:05.289 ******** 2026-04-09 01:00:38.878870 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878875 | orchestrator | 2026-04-09 01:00:38.878881 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-09 01:00:38.878886 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:00.837) 0:00:06.126 ******** 2026-04-09 01:00:38.878891 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878896 | orchestrator | 2026-04-09 01:00:38.878901 | orchestrator | TASK [Enable the ceph dashboard] 
*********************************************** 2026-04-09 01:00:38.878906 | orchestrator | Thursday 09 April 2026 00:59:55 +0000 (0:00:00.889) 0:00:07.016 ******** 2026-04-09 01:00:38.878911 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878917 | orchestrator | 2026-04-09 01:00:38.878926 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-09 01:00:38.878934 | orchestrator | Thursday 09 April 2026 00:59:56 +0000 (0:00:01.067) 0:00:08.083 ******** 2026-04-09 01:00:38.878942 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878950 | orchestrator | 2026-04-09 01:00:38.878958 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-09 01:00:38.878966 | orchestrator | Thursday 09 April 2026 00:59:57 +0000 (0:00:01.193) 0:00:09.276 ******** 2026-04-09 01:00:38.878973 | orchestrator | changed: [testbed-manager] 2026-04-09 01:00:38.878982 | orchestrator | 2026-04-09 01:00:38.878990 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-09 01:00:38.878998 | orchestrator | Thursday 09 April 2026 01:00:11 +0000 (0:00:14.052) 0:00:23.329 ******** 2026-04-09 01:00:38.879018 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:00:38.879028 | orchestrator | 2026-04-09 01:00:38.879037 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-09 01:00:38.879043 | orchestrator | 2026-04-09 01:00:38.879048 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-09 01:00:38.879053 | orchestrator | Thursday 09 April 2026 01:00:11 +0000 (0:00:00.158) 0:00:23.488 ******** 2026-04-09 01:00:38.879058 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:00:38.879069 | orchestrator | 2026-04-09 01:00:38.879075 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2026-04-09 01:00:38.879080 | orchestrator | 2026-04-09 01:00:38.879085 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-09 01:00:38.879090 | orchestrator | Thursday 09 April 2026 01:00:23 +0000 (0:00:11.862) 0:00:35.351 ******** 2026-04-09 01:00:38.879095 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:00:38.879101 | orchestrator | 2026-04-09 01:00:38.879106 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-09 01:00:38.879111 | orchestrator | 2026-04-09 01:00:38.879116 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-09 01:00:38.879129 | orchestrator | Thursday 09 April 2026 01:00:35 +0000 (0:00:11.509) 0:00:46.860 ******** 2026-04-09 01:00:38.879135 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:00:38.879141 | orchestrator | 2026-04-09 01:00:38.879147 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:00:38.879154 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 01:00:38.879160 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.879172 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.879178 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:00:38.879184 | orchestrator | 2026-04-09 01:00:38.879190 | orchestrator | 2026-04-09 01:00:38.879195 | orchestrator | 2026-04-09 01:00:38.879200 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:00:38.879205 | orchestrator | Thursday 09 April 2026 01:00:36 +0000 (0:00:01.363) 0:00:48.224 ******** 
2026-04-09 01:00:38.879210 | orchestrator | =============================================================================== 2026-04-09 01:00:38.879215 | orchestrator | Restart ceph manager service ------------------------------------------- 24.74s 2026-04-09 01:00:38.879221 | orchestrator | Create admin user ------------------------------------------------------ 14.05s 2026-04-09 01:00:38.879226 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.89s 2026-04-09 01:00:38.879231 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2026-04-09 01:00:38.879236 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.12s 2026-04-09 01:00:38.879241 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.07s 2026-04-09 01:00:38.879246 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s 2026-04-09 01:00:38.879251 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s 2026-04-09 01:00:38.879256 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.89s 2026-04-09 01:00:38.879261 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.84s 2026-04-09 01:00:38.879266 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2026-04-09 01:00:38.879271 | orchestrator | 2026-04-09 01:00:38 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:38.879277 | orchestrator | 2026-04-09 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:00:41.906365 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:00:41.907319 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state 
STARTED 2026-04-09 01:00:41.907361 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 444318cb-150f-406e-aa74-573253fcd176 is in state STARTED 2026-04-09 01:00:41.907734 | orchestrator | 2026-04-09 01:00:41 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:00:41.907749 | orchestrator | 2026-04-09 01:00:41 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated roughly every 3 seconds from 01:00:44 through 01:02:40: tasks 98ff95d7-5531-49a2-8d57-3ab7233848cf, 76157d6e-d78c-4931-b985-d1c9549fc422, 444318cb-150f-406e-aa74-573253fcd176 and 2d20a190-d81e-48c2-b2de-b5786acc8cc4 all remain in state STARTED]
444318cb-150f-406e-aa74-573253fcd176 is in state SUCCESS 2026-04-09 01:02:40.540072 | orchestrator | 2026-04-09 01:02:40.540148 | orchestrator | 2026-04-09 01:02:40.540195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:02:40.540204 | orchestrator | 2026-04-09 01:02:40.540212 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:02:40.540219 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:00.232) 0:00:00.232 ******** 2026-04-09 01:02:40.540240 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:02:40.540249 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:02:40.540287 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:02:40.540294 | orchestrator | 2026-04-09 01:02:40.540301 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:02:40.540308 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:00.228) 0:00:00.461 ******** 2026-04-09 01:02:40.540315 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-09 01:02:40.540323 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-09 01:02:40.540329 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-09 01:02:40.540336 | orchestrator | 2026-04-09 01:02:40.540341 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-09 01:02:40.540347 | orchestrator | 2026-04-09 01:02:40.540353 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:40.540360 | orchestrator | Thursday 09 April 2026 00:59:55 +0000 (0:00:00.316) 0:00:00.777 ******** 2026-04-09 01:02:40.540367 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:40.540388 | orchestrator | 2026-04-09 01:02:40.540394 | 
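
The polling above is a plain wait-until-terminal-state loop: each pending task is checked once per interval until it leaves STARTED. A minimal sketch of that loop, assuming a hypothetical `get_state(task_id)` callable (not the actual OSISM client API):

```python
import time

# Assumed terminal states; the real task backend may use a different set.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=7200):
    """Poll each task until all reach a terminal state or the timeout expires."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

The fixed one-second interval matches the "Wait 1 second(s) until the next check" lines; a production client might add backoff instead.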
orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-09 01:02:40.540400 | orchestrator | Thursday 09 April 2026 00:59:55 +0000 (0:00:00.868) 0:00:01.645 ******** 2026-04-09 01:02:40.540407 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-09 01:02:40.540412 | orchestrator | 2026-04-09 01:02:40.540418 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-09 01:02:40.540425 | orchestrator | Thursday 09 April 2026 01:00:00 +0000 (0:00:04.672) 0:00:06.318 ******** 2026-04-09 01:02:40.540432 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-09 01:02:40.540440 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-09 01:02:40.540446 | orchestrator | 2026-04-09 01:02:40.540453 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-09 01:02:40.540459 | orchestrator | Thursday 09 April 2026 01:00:07 +0000 (0:00:06.726) 0:00:13.044 ******** 2026-04-09 01:02:40.540465 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-09 01:02:40.540472 | orchestrator | 2026-04-09 01:02:40.540478 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-09 01:02:40.540485 | orchestrator | Thursday 09 April 2026 01:00:11 +0000 (0:00:03.890) 0:00:16.934 ******** 2026-04-09 01:02:40.540492 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-09 01:02:40.540500 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:02:40.540507 | orchestrator | 2026-04-09 01:02:40.540513 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-09 01:02:40.540518 | orchestrator | Thursday 09 April 2026 01:00:15 +0000 (0:00:03.992) 
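
The service-ks-register tasks report `changed` when a Keystone resource (service, endpoint, project, user, role grant) had to be created and `ok` when it already existed in the desired form. A toy sketch of that idempotent "ensure" semantics against an in-memory catalog — illustrative only, the real role talks to the Keystone API:

```python
def ensure(catalog, kind, name, **attrs):
    """Create `name` of `kind` in `catalog` if absent or different.

    Returns "changed" when the resource was created/updated and "ok" when
    an identical resource already existed -- mirroring the Ansible result
    states in the service-ks-register output.
    """
    bucket = catalog.setdefault(kind, {})
    if bucket.get(name) == attrs:
        return "ok"
    bucket[name] = attrs
    return "changed"


catalog = {}
print(ensure(catalog, "service", "glance", type="image"))  # changed
print(ensure(catalog, "service", "glance", type="image"))  # ok
```

Re-running the deploy therefore converges: a second pass would report `ok` for resources created on the first pass.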
0:00:20.927 ******** 2026-04-09 01:02:40.540525 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:02:40.540532 | orchestrator | 2026-04-09 01:02:40.540538 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-09 01:02:40.540544 | orchestrator | Thursday 09 April 2026 01:00:19 +0000 (0:00:03.904) 0:00:24.831 ******** 2026-04-09 01:02:40.540550 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-09 01:02:40.540556 | orchestrator | 2026-04-09 01:02:40.540563 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-09 01:02:40.540569 | orchestrator | Thursday 09 April 2026 01:00:23 +0000 (0:00:04.514) 0:00:29.345 ******** 2026-04-09 01:02:40.540604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.540623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.540631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.540642 | orchestrator | 2026-04-09 01:02:40.540648 | 
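
The `custom_member_list` entries repeated in the dumps above all follow one template: `server <name> <ip>:<port> check inter 2000 rise 2 fall 5`. A small sketch (hypothetical helper; parameter names are assumptions, not the kolla-ansible variable names) that renders such HAProxy backend lines from a node list:

```python
def haproxy_members(nodes, port, inter=2000, rise=2, fall=5):
    """Render HAProxy `server` lines like custom_member_list above.

    `nodes` is a list of (hostname, ip) pairs; `inter`/`rise`/`fall` are
    the health-check interval (ms) and up/down thresholds.
    """
    return [
        f"server {name} {ip}:{port} check inter {inter} rise {rise} fall {fall}"
        for name, ip in nodes
    ]


members = haproxy_members(
    [("testbed-node-0", "192.168.16.10"),
     ("testbed-node-1", "192.168.16.11"),
     ("testbed-node-2", "192.168.16.12")],
    9292,
)
```

Each glance-api backend member is checked every 2 s, marked up after 2 successful checks and down after 5 failures.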
orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:40.540654 | orchestrator | Thursday 09 April 2026 01:00:27 +0000 (0:00:03.439) 0:00:32.785 ******** 2026-04-09 01:02:40.540662 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:40.540669 | orchestrator | 2026-04-09 01:02:40.540676 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-09 01:02:40.540691 | orchestrator | Thursday 09 April 2026 01:00:27 +0000 (0:00:00.664) 0:00:33.449 ******** 2026-04-09 01:02:40.540698 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:40.540705 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:40.540712 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.540718 | orchestrator | 2026-04-09 01:02:40.540724 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-09 01:02:40.540735 | orchestrator | Thursday 09 April 2026 01:00:31 +0000 (0:00:04.188) 0:00:37.638 ******** 2026-04-09 01:02:40.540743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 01:02:40.540750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 01:02:40.540758 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 01:02:40.540765 | orchestrator | 2026-04-09 01:02:40.540773 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-09 01:02:40.540782 | orchestrator | Thursday 09 April 2026 01:00:33 +0000 (0:00:01.638) 0:00:39.276 ******** 2026-04-09 01:02:40.540788 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 
'ceph', 'enabled': True}) 2026-04-09 01:02:40.540794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 01:02:40.540800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-09 01:02:40.540806 | orchestrator | 2026-04-09 01:02:40.540813 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-09 01:02:40.540819 | orchestrator | Thursday 09 April 2026 01:00:34 +0000 (0:00:01.417) 0:00:40.693 ******** 2026-04-09 01:02:40.540825 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:02:40.540832 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:02:40.540838 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:02:40.540845 | orchestrator | 2026-04-09 01:02:40.540851 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-09 01:02:40.540858 | orchestrator | Thursday 09 April 2026 01:00:35 +0000 (0:00:00.775) 0:00:41.469 ******** 2026-04-09 01:02:40.540864 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.540871 | orchestrator | 2026-04-09 01:02:40.540878 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-09 01:02:40.540885 | orchestrator | Thursday 09 April 2026 01:00:35 +0000 (0:00:00.133) 0:00:41.602 ******** 2026-04-09 01:02:40.540891 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.540897 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.540904 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.540911 | orchestrator | 2026-04-09 01:02:40.540917 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:40.540929 | orchestrator | Thursday 09 April 2026 01:00:36 +0000 (0:00:00.422) 0:00:42.025 ******** 2026-04-09 01:02:40.540935 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:40.540941 | orchestrator | 2026-04-09 01:02:40.540948 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-09 01:02:40.540954 | orchestrator | Thursday 09 April 2026 01:00:37 +0000 (0:00:01.017) 0:00:43.042 ******** 2026-04-09 01:02:40.540961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.540977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.540986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.540999 | orchestrator | 2026-04-09 01:02:40.541006 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-09 01:02:40.541012 | orchestrator | Thursday 09 April 2026 01:00:43 +0000 (0:00:06.516) 0:00:49.558 ******** 2026-04-09 01:02:40.541028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:40.541035 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.541042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:40.541056 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.541071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:40.541078 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.541085 | orchestrator | 2026-04-09 01:02:40.541092 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-09 01:02:40.541099 | orchestrator | Thursday 09 April 2026 01:00:46 +0000 (0:00:02.523) 0:00:52.082 ******** 2026-04-09 01:02:40.541106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:40.541119 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.541126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:40.541132 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.541152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 01:02:40.541165 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.541172 | orchestrator | 2026-04-09 01:02:40.541178 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-09 01:02:40.541185 | orchestrator | Thursday 09 April 2026 01:00:49 +0000 (0:00:03.258) 0:00:55.340 ******** 2026-04-09 01:02:40.541192 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.541198 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.541204 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.541210 | orchestrator | 2026-04-09 01:02:40.541217 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-09 01:02:40.541224 | orchestrator | Thursday 09 April 2026 01:00:52 +0000 (0:00:03.321) 0:00:58.662 ******** 2026-04-09 01:02:40.541231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.541246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.541259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.541267 | orchestrator | 2026-04-09 01:02:40.541274 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-09 01:02:40.541279 | orchestrator | Thursday 09 April 2026 01:00:56 +0000 (0:00:04.057) 0:01:02.719 ******** 2026-04-09 01:02:40.541285 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:40.541291 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:40.541297 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.541304 | orchestrator | 2026-04-09 01:02:40.541311 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-09 01:02:40.541317 | orchestrator | Thursday 09 April 2026 01:01:04 +0000 (0:00:07.203) 0:01:09.922 ******** 2026-04-09 01:02:40.541323 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.541329 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.541337 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.541344 | orchestrator | 2026-04-09 01:02:40.541351 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-09 01:02:40.541358 | orchestrator | Thursday 09 April 2026 01:01:09 
+0000 (0:00:05.298) 0:01:15.221 ******** 2026-04-09 01:02:40.541364 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.541370 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.541747 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.541757 | orchestrator | 2026-04-09 01:02:40.541766 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-09 01:02:40.541773 | orchestrator | Thursday 09 April 2026 01:01:13 +0000 (0:00:03.793) 0:01:19.014 ******** 2026-04-09 01:02:40.541779 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.541786 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542066 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542082 | orchestrator | 2026-04-09 01:02:40.542090 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-09 01:02:40.542109 | orchestrator | Thursday 09 April 2026 01:01:16 +0000 (0:00:03.226) 0:01:22.241 ******** 2026-04-09 01:02:40.542116 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542123 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.542136 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542144 | orchestrator | 2026-04-09 01:02:40.542152 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-09 01:02:40.542159 | orchestrator | Thursday 09 April 2026 01:01:19 +0000 (0:00:02.989) 0:01:25.231 ******** 2026-04-09 01:02:40.542167 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542174 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.542180 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542186 | orchestrator | 2026-04-09 01:02:40.542193 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-09 01:02:40.542199 | orchestrator | Thursday 09 April 2026 01:01:19 
+0000 (0:00:00.482) 0:01:25.713 ******** 2026-04-09 01:02:40.542205 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 01:02:40.542213 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542220 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 01:02:40.542227 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.542234 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-09 01:02:40.542241 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542248 | orchestrator | 2026-04-09 01:02:40.542255 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-09 01:02:40.542261 | orchestrator | Thursday 09 April 2026 01:01:22 +0000 (0:00:02.670) 0:01:28.384 ******** 2026-04-09 01:02:40.542268 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542274 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.542281 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542287 | orchestrator | 2026-04-09 01:02:40.542295 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-09 01:02:40.542302 | orchestrator | Thursday 09 April 2026 01:01:25 +0000 (0:00:02.557) 0:01:30.941 ******** 2026-04-09 01:02:40.542309 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542316 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.542323 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542329 | orchestrator | 2026-04-09 01:02:40.542336 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-09 01:02:40.542342 | orchestrator | Thursday 09 April 2026 01:01:28 +0000 (0:00:03.066) 0:01:34.007 ******** 2026-04-09 01:02:40.542353 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.542439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.542451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 01:02:40.542458 | orchestrator | 2026-04-09 01:02:40.542465 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-09 01:02:40.542471 | orchestrator | Thursday 09 April 2026 01:01:32 +0000 (0:00:04.182) 0:01:38.190 ******** 2026-04-09 01:02:40.542478 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:40.542491 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:40.542498 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:40.542505 | orchestrator | 2026-04-09 01:02:40.542513 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-09 01:02:40.542520 | orchestrator | Thursday 09 April 2026 01:01:32 +0000 (0:00:00.346) 0:01:38.537 ******** 2026-04-09 01:02:40.542527 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.542534 | orchestrator | 
2026-04-09 01:02:40.542541 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-09 01:02:40.542548 | orchestrator | Thursday 09 April 2026 01:01:35 +0000 (0:00:02.351) 0:01:40.889 ******** 2026-04-09 01:02:40.542554 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.542560 | orchestrator | 2026-04-09 01:02:40.542567 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-09 01:02:40.542574 | orchestrator | Thursday 09 April 2026 01:01:37 +0000 (0:00:02.374) 0:01:43.263 ******** 2026-04-09 01:02:40.542580 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.542587 | orchestrator | 2026-04-09 01:02:40.542594 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-09 01:02:40.542601 | orchestrator | Thursday 09 April 2026 01:01:39 +0000 (0:00:02.414) 0:01:45.678 ******** 2026-04-09 01:02:40.542608 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.542615 | orchestrator | 2026-04-09 01:02:40.542621 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-09 01:02:40.542627 | orchestrator | Thursday 09 April 2026 01:02:09 +0000 (0:00:29.362) 0:02:15.040 ******** 2026-04-09 01:02:40.542634 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.542641 | orchestrator | 2026-04-09 01:02:40.542652 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 01:02:40.542660 | orchestrator | Thursday 09 April 2026 01:02:11 +0000 (0:00:02.194) 0:02:17.234 ******** 2026-04-09 01:02:40.542668 | orchestrator | 2026-04-09 01:02:40.542674 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 01:02:40.542685 | orchestrator | Thursday 09 April 2026 01:02:11 +0000 (0:00:00.058) 0:02:17.293 ******** 2026-04-09 01:02:40.542693 | orchestrator | 
2026-04-09 01:02:40.542701 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-09 01:02:40.542711 | orchestrator | Thursday 09 April 2026 01:02:11 +0000 (0:00:00.069) 0:02:17.362 ******** 2026-04-09 01:02:40.542718 | orchestrator | 2026-04-09 01:02:40.542725 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-09 01:02:40.542732 | orchestrator | Thursday 09 April 2026 01:02:11 +0000 (0:00:00.075) 0:02:17.438 ******** 2026-04-09 01:02:40.542738 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:40.542745 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:40.542752 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:40.542758 | orchestrator | 2026-04-09 01:02:40.542765 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:02:40.542773 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-09 01:02:40.542783 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:02:40.542790 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:02:40.542797 | orchestrator | 2026-04-09 01:02:40.542803 | orchestrator | 2026-04-09 01:02:40.542810 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:02:40.542818 | orchestrator | Thursday 09 April 2026 01:02:40 +0000 (0:00:28.454) 0:02:45.893 ******** 2026-04-09 01:02:40.542825 | orchestrator | =============================================================================== 2026-04-09 01:02:40.542832 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.36s 2026-04-09 01:02:40.542845 | orchestrator | glance : Restart glance-api container 
---------------------------------- 28.46s 2026-04-09 01:02:40.542852 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.20s 2026-04-09 01:02:40.542859 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.73s 2026-04-09 01:02:40.542866 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.52s 2026-04-09 01:02:40.542875 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.30s 2026-04-09 01:02:40.542882 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.67s 2026-04-09 01:02:40.542888 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.51s 2026-04-09 01:02:40.542894 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.19s 2026-04-09 01:02:40.542901 | orchestrator | glance : Check glance containers ---------------------------------------- 4.18s 2026-04-09 01:02:40.542907 | orchestrator | glance : Copying over config.json files for services -------------------- 4.06s 2026-04-09 01:02:40.542914 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.99s 2026-04-09 01:02:40.542920 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.90s 2026-04-09 01:02:40.542926 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.89s 2026-04-09 01:02:40.542932 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.79s 2026-04-09 01:02:40.542938 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.44s 2026-04-09 01:02:40.542944 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.32s 2026-04-09 01:02:40.542951 | orchestrator | service-cert-copy : glance | Copying over backend 
internal TLS key ------ 3.26s 2026-04-09 01:02:40.542957 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.23s 2026-04-09 01:02:40.542964 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.07s 2026-04-09 01:02:40.542971 | orchestrator | 2026-04-09 01:02:40 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:02:40.542978 | orchestrator | 2026-04-09 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:43.654685 | orchestrator | 2026-04-09 01:02:43 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:02:43.654787 | orchestrator | 2026-04-09 01:02:43 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:02:43.654799 | orchestrator | 2026-04-09 01:02:43 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:02:43.654805 | orchestrator | 2026-04-09 01:02:43 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:02:43.654813 | orchestrator | 2026-04-09 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:46.688888 | orchestrator | 2026-04-09 01:02:46 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:02:46.689265 | orchestrator | 2026-04-09 01:02:46 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:02:46.691221 | orchestrator | 2026-04-09 01:02:46 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:02:46.692170 | orchestrator | 2026-04-09 01:02:46 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:02:46.692199 | orchestrator | 2026-04-09 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:49.727296 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 
01:02:49.728097 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:02:49.728912 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:02:49.729733 | orchestrator | 2026-04-09 01:02:49 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:02:49.729757 | orchestrator | 2026-04-09 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:52.781933 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:02:52.783672 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:02:52.785250 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state STARTED 2026-04-09 01:02:52.787266 | orchestrator | 2026-04-09 01:02:52 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED 2026-04-09 01:02:52.787312 | orchestrator | 2026-04-09 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:55.836818 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:02:55.839824 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:02:55.839991 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task 76157d6e-d78c-4931-b985-d1c9549fc422 is in state SUCCESS 2026-04-09 01:02:55.843015 | orchestrator | 2026-04-09 01:02:55.843058 | orchestrator | 2026-04-09 01:02:55.843064 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:02:55.843068 | orchestrator | 2026-04-09 01:02:55.843071 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:02:55.843075 | orchestrator | 
Thursday 09 April 2026 00:59:47 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-04-09 01:02:55.843078 | orchestrator | ok: [testbed-manager] 2026-04-09 01:02:55.843082 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:02:55.843085 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:02:55.843088 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:02:55.843092 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:02:55.843095 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:02:55.843098 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:02:55.843101 | orchestrator | 2026-04-09 01:02:55.843104 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:02:55.843107 | orchestrator | Thursday 09 April 2026 00:59:48 +0000 (0:00:00.696) 0:00:00.978 ******** 2026-04-09 01:02:55.843111 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843114 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843117 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843120 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843123 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843126 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843163 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-09 01:02:55.843167 | orchestrator | 2026-04-09 01:02:55.843170 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-09 01:02:55.843173 | orchestrator | 2026-04-09 01:02:55.843176 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 01:02:55.843179 | orchestrator | Thursday 09 April 2026 00:59:49 +0000 (0:00:00.851) 0:00:01.830 ******** 2026-04-09 01:02:55.843182 | orchestrator 
| included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:02:55.843186 | orchestrator | 2026-04-09 01:02:55.843189 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-09 01:02:55.843202 | orchestrator | Thursday 09 April 2026 00:59:50 +0000 (0:00:01.180) 0:00:03.010 ******** 2026-04-09 01:02:55.843212 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 01:02:55.843217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843224 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843242 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843271 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 01:02:55.843300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843319 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:02:55.843528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 01:02:55.843532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:02:55.843538 | orchestrator |
2026-04-09 01:02:55.843542 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-09 01:02:55.843545 | orchestrator | Thursday 09 April 2026 00:59:54 +0000 (0:00:04.129) 0:00:07.140 ********
2026-04-09 01:02:55.843548 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 01:02:55.843552 | orchestrator |
2026-04-09 01:02:55.843555 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-09 01:02:55.843558 | orchestrator | Thursday 09 April 2026 00:59:56 +0000 (0:00:01.489) 0:00:08.629 ********
2026-04-09 01:02:55.843561 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 01:02:55.843567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843589 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 
01:02:55.843592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.843595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843613 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843622 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843625 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 01:02:55.843633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.843673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.843713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:02:55.843718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 01:02:55.843721 | orchestrator |
2026-04-09 01:02:55.843725 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-04-09 01:02:55.843728 | orchestrator | Thursday 09 April 2026 01:00:02 +0000 (0:00:06.002) 0:00:14.631 ********
2026-04-09 01:02:55.843732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-09 01:02:55.843736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843741 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 01:02:55.843753 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843776 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.843779 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.843782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843807 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.843811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843838 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.843847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843863 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.843868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843893 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.843929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843944 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.843947 | orchestrator | 2026-04-09 01:02:55.843950 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-09 01:02:55.843954 | orchestrator | Thursday 09 April 2026 01:00:03 +0000 (0:00:01.513) 0:00:16.145 ******** 2026-04-09 01:02:55.843957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-09 01:02:55.843960 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-04-09 01:02:55.843966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-09 01:02:55.843977 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.843986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.843992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.843995 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.844003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.844019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.844200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844245 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 01:02:55.844262 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.844268 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.844273 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.844281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.844386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844397 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.844402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.844411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844423 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.844428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 01:02:55.844433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 01:02:55.844447 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.844452 | orchestrator | 2026-04-09 01:02:55.844457 | 
orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-09 01:02:55.844479 | orchestrator | Thursday 09 April 2026 01:00:05 +0000 (0:00:01.936) 0:00:18.082 ******** 2026-04-09 01:02:55.844485 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 01:02:55.844491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844522 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.844525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844629 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 01:02:55.844633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844637 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.844659 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.844674 | orchestrator | 2026-04-09 01:02:55.844678 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-09 01:02:55.844683 | orchestrator | Thursday 09 April 2026 01:00:10 +0000 (0:00:05.353) 0:00:23.435 ******** 2026-04-09 01:02:55.844688 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:02:55.844693 | orchestrator | 2026-04-09 01:02:55.844698 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-09 01:02:55.844706 | orchestrator | Thursday 09 April 2026 01:00:11 +0000 (0:00:00.851) 0:00:24.286 ******** 2026-04-09 01:02:55.844711 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 
1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844721 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844726 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844735 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844741 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.844746 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844754 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844758 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844765 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844769 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844773 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844777 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327674, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9908922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844780 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1327670, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9887223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844785 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1327670, 'dev': 83, 'nlink': 1, 'atime': 
1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9887223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844791 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1327670, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9887223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844794 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1327670, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9887223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844797 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1327670, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9887223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844800 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327684, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9941876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844805 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844809 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327684, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9941876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844814 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1327688, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9952862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.844820 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327684, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9941876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844823 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327667, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9881988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.844826 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327684, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9941876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 01:02:55.844829 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2026-04-09 01:02:55.844834 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2026-04-09 01:02:55.844838 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-04-09 01:02:55.844841 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-04-09 01:02:55.844848 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-04-09 01:02:55.844851 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-04-09 01:02:55.844855 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-04-09 01:02:55.844858 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-04-09 01:02:55.844862 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-04-09 01:02:55.844866 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-04-09 01:02:55.845000 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-04-09 01:02:55.845008 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-04-09 01:02:55.845011 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-04-09 01:02:55.845015 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-04-09 01:02:55.845018 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-04-09 01:02:55.845023 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-04-09 01:02:55.845027 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-04-09 01:02:55.845033 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-04-09 01:02:55.845039 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-04-09 01:02:55.845042 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-04-09 01:02:55.845045 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-04-09 01:02:55.845048 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-04-09 01:02:55.845053 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-09 01:02:55.845056 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-04-09 01:02:55.845063 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-04-09 01:02:55.845073 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-09 01:02:55.845081 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-09 01:02:55.845086 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-04-09 01:02:55.845100 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-04-09 01:02:55.845107 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-09 01:02:55.845112 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-09 01:02:55.845184 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-09 01:02:55.845199 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-04-09 01:02:55.845206 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-09 01:02:55.845209 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-09 01:02:55.845212 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-04-09 01:02:55.845220 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-09 01:02:55.845288 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-09 01:02:55.845292 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-09 01:02:55.845305 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-09 01:02:55.845309 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-09 01:02:55.845312 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-09 01:02:55.845315 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-09 01:02:55.845320 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-04-09 01:02:55.845326 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-09 01:02:55.845329 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-09 01:02:55.845343 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-09 01:02:55.845348 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-09 01:02:55.845353 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-09 01:02:55.845411 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-04-09 01:02:55.845421 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-04-09 01:02:55.845431 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-04-09 01:02:55.845436 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-04-09 01:02:55.845446 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-09 01:02:55.845452 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-09 01:02:55.845457 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-04-09 01:02:55.845463 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-04-09 01:02:55.845470 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-09 01:02:55.845479 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-09 01:02:55.845484 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-09 01:02:55.845491 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-04-09 01:02:55.845494 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-09 01:02:55.845497 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2026-04-09 01:02:55.845501 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-09 01:02:55.845508 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-09 01:02:55.845511 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-04-09 01:02:55.845515 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-04-09 01:02:55.845520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327682, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 
'mtime': 1775692946.0, 'ctime': 1775693922.9932985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845523 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327669, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9884825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845526 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845530 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1327665, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.987214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845535 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.845541 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1327665, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.987214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845544 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327669, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9884825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845547 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327679, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9918401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-04-09 01:02:55.845552 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1327665, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.987214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845556 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327682, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9932985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845559 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327682, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9932985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845562 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845567 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.845572 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1327665, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.987214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845575 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327682, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9932985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845579 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327679, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9918401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845584 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327679, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9918401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845587 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327682, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9932985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845590 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327679, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9918401, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845596 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845599 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.845605 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845609 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.845612 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327679, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9918401, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845615 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327676, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9911773, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845620 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845624 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.845627 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-09 01:02:55.845630 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.845633 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1327683, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9938095, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845638 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327677, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9914663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845644 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327672, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9900675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2026-04-09 01:02:55.845647 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327687, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.995105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845650 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327663, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9867742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845656 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327701, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0022614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845659 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327686, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9946027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845665 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327669, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9884825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845668 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1327665, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.987214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845673 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327682, 
'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9932985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327679, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9918401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845680 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327698, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693923.0006855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-09 01:02:55.845683 | orchestrator | 2026-04-09 01:02:55.845686 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-09 01:02:55.845690 | orchestrator | Thursday 09 April 2026 01:00:36 +0000 (0:00:24.573) 0:00:48.860 ******** 2026-04-09 01:02:55.845693 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:02:55.845696 | orchestrator | 2026-04-09 01:02:55.845700 | orchestrator | TASK [prometheus : Find 
prometheus host config overrides] ********************** 2026-04-09 01:02:55.845704 | orchestrator | Thursday 09 April 2026 01:00:37 +0000 (0:00:01.057) 0:00:49.919 ******** 2026-04-09 01:02:55.845708 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845715 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845718 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845723 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845726 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:02:55.845729 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845732 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845735 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845738 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845742 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845745 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:02:55.845748 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845754 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845760 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845763 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 01:02:55.845767 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845770 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845773 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845779 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845782 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 01:02:55.845785 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845788 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845791 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845797 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845800 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:02:55.845803 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845807 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845810 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845813 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845816 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845819 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:02:55.845822 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.845825 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 01:02:55.845830 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-09 01:02:55.845833 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-09 
01:02:55.845836 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-09 01:02:55.845839 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:02:55.845842 | orchestrator | 2026-04-09 01:02:55.845845 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-09 01:02:55.845848 | orchestrator | Thursday 09 April 2026 01:00:40 +0000 (0:00:03.259) 0:00:53.178 ******** 2026-04-09 01:02:55.845851 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:02:55.845854 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.845857 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:02:55.845861 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.845864 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:02:55.845869 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.845872 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:02:55.845875 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.845878 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:02:55.845881 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.845884 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-09 01:02:55.845887 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.845890 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-09 01:02:55.845893 | orchestrator | 2026-04-09 01:02:55.845896 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-09 
01:02:55.845899 | orchestrator | Thursday 09 April 2026 01:00:56 +0000 (0:00:16.210) 0:01:09.389 ******** 2026-04-09 01:02:55.845902 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:02:55.845905 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.845910 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:02:55.845913 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.845916 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:02:55.845920 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.845923 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:02:55.845926 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.845929 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:02:55.845932 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.845936 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-09 01:02:55.845940 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.845943 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-09 01:02:55.845947 | orchestrator | 2026-04-09 01:02:55.845951 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-09 01:02:55.845955 | orchestrator | Thursday 09 April 2026 01:01:00 +0000 (0:00:03.655) 0:01:13.044 ******** 2026-04-09 01:02:55.845959 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:02:55.845963 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.845966 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:02:55.845970 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.845974 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:02:55.845978 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:02:55.845981 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.845985 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.845988 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:02:55.845992 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.845996 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-09 01:02:55.846002 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846006 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-09 01:02:55.846009 | orchestrator | 2026-04-09 01:02:55.846035 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-09 01:02:55.846040 | orchestrator | Thursday 09 April 2026 01:01:02 +0000 (0:00:02.161) 0:01:15.206 ******** 2026-04-09 01:02:55.846044 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:02:55.846047 | orchestrator | 2026-04-09 01:02:55.846053 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-09 
01:02:55.846057 | orchestrator | Thursday 09 April 2026 01:01:03 +0000 (0:00:00.849) 0:01:16.056 ******** 2026-04-09 01:02:55.846060 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.846064 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.846068 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.846071 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.846075 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.846079 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.846082 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846086 | orchestrator | 2026-04-09 01:02:55.846090 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-09 01:02:55.846093 | orchestrator | Thursday 09 April 2026 01:01:04 +0000 (0:00:00.601) 0:01:16.657 ******** 2026-04-09 01:02:55.846097 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.846101 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.846105 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.846108 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846112 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.846116 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:55.846119 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:55.846123 | orchestrator | 2026-04-09 01:02:55.846127 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-09 01:02:55.846130 | orchestrator | Thursday 09 April 2026 01:01:06 +0000 (0:00:02.481) 0:01:19.138 ******** 2026-04-09 01:02:55.846134 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846137 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.846141 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846145 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.846149 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846153 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.846157 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846160 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.846164 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846168 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.846173 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846177 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846181 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-09 01:02:55.846185 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.846188 | orchestrator | 2026-04-09 01:02:55.846192 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-09 01:02:55.846195 | orchestrator | Thursday 09 April 2026 01:01:08 +0000 (0:00:02.086) 0:01:21.225 ******** 2026-04-09 01:02:55.846199 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 01:02:55.846203 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 01:02:55.846210 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.846213 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.846217 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-09 01:02:55.846221 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 01:02:55.846224 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.846228 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 01:02:55.846232 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.846236 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 01:02:55.846239 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.846243 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-09 01:02:55.846247 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846250 | orchestrator | 2026-04-09 01:02:55.846254 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-09 01:02:55.846258 | orchestrator | Thursday 09 April 2026 01:01:11 +0000 (0:00:02.245) 0:01:23.470 ******** 2026-04-09 01:02:55.846262 | orchestrator | [WARNING]: Skipped 2026-04-09 01:02:55.846265 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-09 01:02:55.846269 | orchestrator | due to this access issue: 2026-04-09 01:02:55.846273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-09 01:02:55.846276 | orchestrator | not a directory 2026-04-09 01:02:55.846280 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:02:55.846283 | orchestrator | 2026-04-09 01:02:55.846287 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-09 01:02:55.846291 | orchestrator | 
Thursday 09 April 2026 01:01:12 +0000 (0:00:01.077) 0:01:24.548 ******** 2026-04-09 01:02:55.846294 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.846298 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.846302 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.846305 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.846309 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.846313 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.846317 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846320 | orchestrator | 2026-04-09 01:02:55.846326 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-09 01:02:55.846329 | orchestrator | Thursday 09 April 2026 01:01:13 +0000 (0:00:01.220) 0:01:25.768 ******** 2026-04-09 01:02:55.846333 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.846337 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:55.846341 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:55.846344 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:55.846348 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:02:55.846352 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:02:55.846356 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:02:55.846391 | orchestrator | 2026-04-09 01:02:55.846396 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-09 01:02:55.846401 | orchestrator | Thursday 09 April 2026 01:01:14 +0000 (0:00:01.246) 0:01:27.015 ******** 2026-04-09 01:02:55.846406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846416 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-09 01:02:55.846420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846435 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 01:02:55.846445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846457 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846470 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-09 01:02:55.846476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846499 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 01:02:55.846519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846526 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 01:02:55.846529 | orchestrator | 2026-04-09 01:02:55.846532 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-09 01:02:55.846537 | orchestrator | Thursday 09 April 2026 01:01:19 +0000 (0:00:05.261) 0:01:32.276 ******** 2026-04-09 01:02:55.846540 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 01:02:55.846545 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:02:55.846548 | orchestrator | 2026-04-09 01:02:55.846551 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846554 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:01.494) 0:01:33.771 ******** 2026-04-09 01:02:55.846557 | orchestrator | 2026-04-09 01:02:55.846561 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846564 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.065) 0:01:33.837 ******** 2026-04-09 01:02:55.846567 | orchestrator | 2026-04-09 01:02:55.846570 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846573 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.063) 0:01:33.900 ******** 2026-04-09 01:02:55.846576 | orchestrator | 2026-04-09 01:02:55.846579 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846582 | orchestrator | Thursday 09 
April 2026 01:01:21 +0000 (0:00:00.061) 0:01:33.962 ******** 2026-04-09 01:02:55.846585 | orchestrator | 2026-04-09 01:02:55.846588 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846591 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.060) 0:01:34.023 ******** 2026-04-09 01:02:55.846594 | orchestrator | 2026-04-09 01:02:55.846597 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846600 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.061) 0:01:34.084 ******** 2026-04-09 01:02:55.846603 | orchestrator | 2026-04-09 01:02:55.846606 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 01:02:55.846609 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.064) 0:01:34.148 ******** 2026-04-09 01:02:55.846612 | orchestrator | 2026-04-09 01:02:55.846615 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-09 01:02:55.846618 | orchestrator | Thursday 09 April 2026 01:01:21 +0000 (0:00:00.106) 0:01:34.254 ******** 2026-04-09 01:02:55.846621 | orchestrator | changed: [testbed-manager] 2026-04-09 01:02:55.846624 | orchestrator | 2026-04-09 01:02:55.846627 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-09 01:02:55.846630 | orchestrator | Thursday 09 April 2026 01:01:37 +0000 (0:00:15.938) 0:01:50.193 ******** 2026-04-09 01:02:55.846633 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:55.846636 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:55.846641 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:02:55.846645 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:02:55.846648 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:02:55.846651 | orchestrator | changed: [testbed-node-2] 2026-04-09 
01:02:55.846654 | orchestrator | changed: [testbed-manager]
2026-04-09 01:02:55.846657 | orchestrator |
2026-04-09 01:02:55.846660 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-09 01:02:55.846663 | orchestrator | Thursday 09 April 2026 01:01:51 +0000 (0:00:14.182) 0:02:04.375 ********
2026-04-09 01:02:55.846666 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:02:55.846669 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:02:55.846672 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:02:55.846675 | orchestrator |
2026-04-09 01:02:55.846678 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-09 01:02:55.846681 | orchestrator | Thursday 09 April 2026 01:01:57 +0000 (0:00:05.808) 0:02:10.183 ********
2026-04-09 01:02:55.846684 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:02:55.846687 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:02:55.846690 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:02:55.846693 | orchestrator |
2026-04-09 01:02:55.846696 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-09 01:02:55.846699 | orchestrator | Thursday 09 April 2026 01:02:02 +0000 (0:00:04.586) 0:02:14.770 ********
2026-04-09 01:02:55.846702 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:02:55.846707 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:02:55.846710 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:02:55.846713 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:02:55.846716 | orchestrator | changed: [testbed-manager]
2026-04-09 01:02:55.846719 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:02:55.846722 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:02:55.846725 | orchestrator |
2026-04-09 01:02:55.846729 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-09 01:02:55.846732 | orchestrator | Thursday 09 April 2026 01:02:16 +0000 (0:00:13.970) 0:02:28.741 ********
2026-04-09 01:02:55.846735 | orchestrator | changed: [testbed-manager]
2026-04-09 01:02:55.846738 | orchestrator |
2026-04-09 01:02:55.846741 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-09 01:02:55.846744 | orchestrator | Thursday 09 April 2026 01:02:28 +0000 (0:00:12.244) 0:02:40.985 ********
2026-04-09 01:02:55.846747 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:02:55.846750 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:02:55.846753 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:02:55.846756 | orchestrator |
2026-04-09 01:02:55.846759 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-09 01:02:55.846762 | orchestrator | Thursday 09 April 2026 01:02:39 +0000 (0:00:10.832) 0:02:51.818 ********
2026-04-09 01:02:55.846765 | orchestrator | changed: [testbed-manager]
2026-04-09 01:02:55.846768 | orchestrator |
2026-04-09 01:02:55.846771 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-09 01:02:55.846774 | orchestrator | Thursday 09 April 2026 01:02:44 +0000 (0:00:04.685) 0:02:56.503 ********
2026-04-09 01:02:55.846777 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:02:55.846780 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:02:55.846783 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:02:55.846786 | orchestrator |
2026-04-09 01:02:55.846789 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:02:55.846792 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-09 01:02:55.846797 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 01:02:55.846800 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 01:02:55.846803 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-09 01:02:55.846807 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 01:02:55.846810 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 01:02:55.846813 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-09 01:02:55.846816 | orchestrator |
2026-04-09 01:02:55.846819 | orchestrator |
2026-04-09 01:02:55.846822 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:02:55.846825 | orchestrator | Thursday 09 April 2026 01:02:55 +0000 (0:00:11.062) 0:03:07.566 ********
2026-04-09 01:02:55.846828 | orchestrator | ===============================================================================
2026-04-09 01:02:55.846831 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.57s
2026-04-09 01:02:55.846834 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.21s
2026-04-09 01:02:55.846837 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.94s
2026-04-09 01:02:55.846842 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.18s
2026-04-09 01:02:55.846845 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.97s
2026-04-09 01:02:55.846848 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.24s
2026-04-09 01:02:55.846853 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.06s
2026-04-09 01:02:55.846856 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.83s
2026-04-09 01:02:55.846859 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.00s
2026-04-09 01:02:55.846862 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.81s
2026-04-09 01:02:55.846865 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.35s
2026-04-09 01:02:55.846868 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.26s
2026-04-09 01:02:55.846871 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.69s
2026-04-09 01:02:55.846874 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 4.59s
2026-04-09 01:02:55.846877 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.13s
2026-04-09 01:02:55.846880 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.66s
2026-04-09 01:02:55.846883 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.26s
2026-04-09 01:02:55.846886 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s
2026-04-09 01:02:55.846889 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.25s
2026-04-09 01:02:55.846892 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.16s
2026-04-09 01:02:55.846895 | orchestrator | 2026-04-09 01:02:55 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state STARTED
2026-04-09 01:02:55.846898 | orchestrator | 2026-04-09 01:02:55 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:02:58.882862 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09
01:02:58.883202 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:02:58.883953 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state STARTED
2026-04-09 01:02:58.884579 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task 61059790-a2ff-4d12-985f-c5606dc9367d is in state STARTED
2026-04-09 01:02:58.886108 | orchestrator | 2026-04-09 01:02:58 | INFO  | Task 2d20a190-d81e-48c2-b2de-b5786acc8cc4 is in state SUCCESS
2026-04-09 01:02:58.887008 | orchestrator |
2026-04-09 01:02:58.887031 | orchestrator |
2026-04-09 01:02:58.887040 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:02:58.887051 | orchestrator |
2026-04-09 01:02:58.887058 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:02:58.887066 | orchestrator | Thursday 09 April 2026 00:59:57 +0000 (0:00:00.577) 0:00:00.577 ********
2026-04-09 01:02:58.887073 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:02:58.887080 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:02:58.887086 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:02:58.887093 | orchestrator |
2026-04-09 01:02:58.887101 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:02:58.887108 | orchestrator | Thursday 09 April 2026 00:59:57 +0000 (0:00:00.417) 0:00:00.995 ********
2026-04-09 01:02:58.887124 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-09 01:02:58.887131 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-09 01:02:58.887138 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-09 01:02:58.887144 | orchestrator |
2026-04-09 01:02:58.887148 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-09 01:02:58.887163 | orchestrator |
2026-04-09 01:02:58.887168 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-09 01:02:58.887172 | orchestrator | Thursday 09 April 2026 00:59:58 +0000 (0:00:00.417) 0:00:01.412 ********
2026-04-09 01:02:58.887176 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:02:58.887181 | orchestrator |
2026-04-09 01:02:58.887185 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-09 01:02:58.887189 | orchestrator | Thursday 09 April 2026 00:59:58 +0000 (0:00:00.589) 0:00:02.002 ********
2026-04-09 01:02:58.887193 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-09 01:02:58.887198 | orchestrator |
2026-04-09 01:02:58.887202 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-09 01:02:58.887206 | orchestrator | Thursday 09 April 2026 01:00:03 +0000 (0:00:04.407) 0:00:06.409 ********
2026-04-09 01:02:58.887210 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-09 01:02:58.887215 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-09 01:02:58.887219 | orchestrator |
2026-04-09 01:02:58.887223 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-09 01:02:58.887227 | orchestrator | Thursday 09 April 2026 01:00:09 +0000 (0:00:06.666) 0:00:13.075 ********
2026-04-09 01:02:58.887231 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 01:02:58.887235 | orchestrator |
2026-04-09 01:02:58.887240 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-09 01:02:58.887244 | orchestrator | Thursday 09 April 2026 01:00:13 +0000 (0:00:03.349) 0:00:16.425 ********
2026-04-09 01:02:58.887248 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-09 01:02:58.887252 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 01:02:58.887257 | orchestrator |
2026-04-09 01:02:58.887261 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-09 01:02:58.887265 | orchestrator | Thursday 09 April 2026 01:00:17 +0000 (0:00:04.252) 0:00:20.678 ********
2026-04-09 01:02:58.887269 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 01:02:58.887273 | orchestrator |
2026-04-09 01:02:58.887277 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-04-09 01:02:58.887281 | orchestrator | Thursday 09 April 2026 01:00:21 +0000 (0:00:03.569) 0:00:24.247 ********
2026-04-09 01:02:58.887285 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-04-09 01:02:58.887289 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-04-09 01:02:58.887293 | orchestrator |
2026-04-09 01:02:58.887297 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-04-09 01:02:58.887301 | orchestrator | Thursday 09 April 2026 01:00:29 +0000 (0:00:08.427) 0:00:32.675 ********
2026-04-09 01:02:58.887308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'},
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.887321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.887336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.887345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.887606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-09 01:02:58.887611 | orchestrator |
2026-04-09 01:02:58.887636 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-09 01:02:58.887648 | orchestrator | Thursday 09 April 2026 01:00:33 +0000 (0:00:03.610) 0:00:36.285 ********
2026-04-09 01:02:58.887652 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:02:58.887657 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:02:58.887661 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:02:58.887665 | orchestrator |
2026-04-09 01:02:58.887669 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-09 01:02:58.887673 | orchestrator | Thursday 09 April 2026 01:00:33 +0000 (0:00:00.268) 0:00:36.554 ********
2026-04-09 01:02:58.887678 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:02:58.887682 | orchestrator |
2026-04-09 01:02:58.887686 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-09 01:02:58.887694 | orchestrator | Thursday 09 April 2026 01:00:33 +0000 (0:00:00.479) 0:00:37.034 ********
2026-04-09 01:02:58.887698 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-09 01:02:58.887703 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-04-09 01:02:58.887707 | orchestrator | changed:
[testbed-node-2] => (item=cinder-volume) 2026-04-09 01:02:58.887711 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-09 01:02:58.887716 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-09 01:02:58.887720 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-09 01:02:58.887724 | orchestrator | 2026-04-09 01:02:58.887728 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-09 01:02:58.887735 | orchestrator | Thursday 09 April 2026 01:00:35 +0000 (0:00:01.870) 0:00:38.904 ******** 2026-04-09 01:02:58.887740 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 01:02:58.887745 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 01:02:58.887750 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 01:02:58.887757 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 01:02:58.887767 | orchestrator | skipping: [testbed-node-2] 
=> (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 01:02:58.887772 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-09 01:02:58.887777 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 01:02:58.887782 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 01:02:58.887789 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 01:02:58.887796 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 01:02:58.887803 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-09 01:02:58.887809 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-04-09 01:02:58.887816 | orchestrator |
2026-04-09 01:02:58.887825 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-04-09 01:02:58.887835 | orchestrator | Thursday 09 April 2026 01:00:39 +0000 (0:00:04.184) 0:00:43.089 ********
2026-04-09 01:02:58.888015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-04-09 01:02:58.888024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-04-09 01:02:58.888029 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-04-09 01:02:58.888033 | orchestrator |
2026-04-09 01:02:58.888037 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-04-09 01:02:58.888046 | orchestrator | Thursday 09 April 2026 01:00:42 +0000 (0:00:02.036) 0:00:45.126 ********
2026-04-09 01:02:58.888050 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-04-09 01:02:58.888054 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-04-09 01:02:58.888058 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-04-09 01:02:58.888062 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 01:02:58.888066 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 01:02:58.888071 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 01:02:58.888075 | orchestrator |
2026-04-09 01:02:58.888079 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-04-09 01:02:58.888083 | orchestrator | Thursday 09 April 2026 01:00:44 +0000 (0:00:02.930) 0:00:48.056 ********
2026-04-09 01:02:58.888087 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-09 01:02:58.888092 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-04-09 01:02:58.888096 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-04-09 01:02:58.888100 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-09 01:02:58.888104 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-04-09 01:02:58.888108 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-04-09 01:02:58.888112 | orchestrator |
2026-04-09 01:02:58.888116 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-04-09 01:02:58.888121 | orchestrator | Thursday 09 April 2026 01:00:46 +0000 (0:00:01.158) 0:00:49.215 ********
2026-04-09 01:02:58.888125 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:02:58.888129 | orchestrator |
2026-04-09 01:02:58.888133 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-04-09 01:02:58.888137 | orchestrator | Thursday 09 April 2026 01:00:46 +0000 (0:00:00.193) 0:00:49.409 ********
2026-04-09 01:02:58.888141 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:02:58.888145 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:02:58.888149 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:02:58.888161 | orchestrator |
2026-04-09 01:02:58.888170 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-09 01:02:58.888174 | orchestrator | Thursday 09 April 2026 01:00:46 +0000 (0:00:00.238) 0:00:49.648 ******** 2026-04-09 01:02:58.888178 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:02:58.888197 | orchestrator | 2026-04-09 01:02:58.888202 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-09 01:02:58.888207 | orchestrator | Thursday 09 April 2026 01:00:47 +0000 (0:00:00.858) 0:00:50.506 ******** 2026-04-09 01:02:58.888215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888237 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888298 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888416 | orchestrator | 2026-04-09 01:02:58.888423 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-09 01:02:58.888428 | orchestrator | Thursday 09 April 2026 01:00:51 +0000 (0:00:04.464) 0:00:54.971 ******** 2026-04-09 01:02:58.888437 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.888441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888454 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:58.888465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
2026-04-09 01:02:58.888470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888486 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:58.888490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.888497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888514 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:58.888519 | orchestrator | 2026-04-09 01:02:58.888523 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-09 01:02:58.888527 | orchestrator | Thursday 09 April 2026 01:00:52 +0000 (0:00:01.041) 0:00:56.012 ******** 2026-04-09 01:02:58.888531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.888536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888548 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888555 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:58.888561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.888566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888578 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:58.888585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.888594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888607 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:58.888611 | orchestrator | 2026-04-09 01:02:58.888616 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-09 01:02:58.888620 | orchestrator | Thursday 09 April 2026 01:00:54 +0000 (0:00:01.239) 0:00:57.251 ******** 2026-04-09 01:02:58.888625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888738 | orchestrator | 2026-04-09 01:02:58.888745 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-09 01:02:58.888752 | orchestrator | Thursday 09 April 2026 01:00:58 +0000 (0:00:04.639) 0:01:01.891 ******** 2026-04-09 01:02:58.888760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-09 01:02:58.888767 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-09 01:02:58.888772 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-09 01:02:58.888781 | orchestrator | 2026-04-09 01:02:58.888786 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-09 01:02:58.888791 | orchestrator | Thursday 09 April 2026 01:01:01 +0000 (0:00:02.281) 0:01:04.173 ******** 2026-04-09 01:02:58.888800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.888819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.888880 | orchestrator | 2026-04-09 01:02:58.888889 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-09 01:02:58.888899 | orchestrator | Thursday 09 April 2026 01:01:15 +0000 (0:00:14.343) 0:01:18.516 ******** 2026-04-09 01:02:58.888906 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.888913 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:58.888920 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:58.888928 | orchestrator | 2026-04-09 01:02:58.888936 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-09 01:02:58.888943 | orchestrator | Thursday 09 April 2026 01:01:17 +0000 (0:00:02.020) 0:01:20.537 ******** 2026-04-09 01:02:58.888951 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:58.888958 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.888967 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:58.888972 | orchestrator | 2026-04-09 01:02:58.888977 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-09 01:02:58.888982 | orchestrator | Thursday 09 April 2026 01:01:19 +0000 (0:00:01.755) 0:01:22.293 ******** 2026-04-09 01:02:58.888987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.888993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.888998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889013 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:58.889023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.889029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 
01:02:58.889045 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:58.889049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-09 01:02:58.889054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 01:02:58.889072 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:58.889077 | orchestrator | 2026-04-09 01:02:58.889081 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-09 01:02:58.889085 | orchestrator | Thursday 09 April 2026 01:01:19 +0000 (0:00:00.751) 0:01:23.045 ******** 2026-04-09 01:02:58.889089 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:58.889094 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:58.889098 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:58.889102 | orchestrator | 2026-04-09 01:02:58.889112 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-09 01:02:58.889116 | orchestrator | Thursday 09 April 2026 01:01:20 +0000 (0:00:00.393) 0:01:23.438 ******** 2026-04-09 01:02:58.889121 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.889128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.889137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-09 01:02:58.889144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889153 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-09 01:02:58.889196 | orchestrator | 2026-04-09 01:02:58.889200 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-09 01:02:58.889204 | orchestrator | Thursday 09 April 2026 01:01:23 +0000 (0:00:03.580) 0:01:27.019 ******** 2026-04-09 01:02:58.889209 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:58.889213 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:02:58.889217 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:02:58.889221 | orchestrator | 2026-04-09 01:02:58.889226 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-09 01:02:58.889230 | orchestrator | Thursday 09 April 2026 01:01:24 +0000 (0:00:00.318) 0:01:27.337 ******** 2026-04-09 01:02:58.889234 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889238 | 
orchestrator | 2026-04-09 01:02:58.889242 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-09 01:02:58.889246 | orchestrator | Thursday 09 April 2026 01:01:26 +0000 (0:00:02.300) 0:01:29.638 ******** 2026-04-09 01:02:58.889251 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889255 | orchestrator | 2026-04-09 01:02:58.889259 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-09 01:02:58.889263 | orchestrator | Thursday 09 April 2026 01:01:29 +0000 (0:00:02.493) 0:01:32.131 ******** 2026-04-09 01:02:58.889267 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889271 | orchestrator | 2026-04-09 01:02:58.889276 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 01:02:58.889280 | orchestrator | Thursday 09 April 2026 01:01:50 +0000 (0:00:21.472) 0:01:53.604 ******** 2026-04-09 01:02:58.889284 | orchestrator | 2026-04-09 01:02:58.889288 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 01:02:58.889293 | orchestrator | Thursday 09 April 2026 01:01:50 +0000 (0:00:00.064) 0:01:53.668 ******** 2026-04-09 01:02:58.889297 | orchestrator | 2026-04-09 01:02:58.889301 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-09 01:02:58.889305 | orchestrator | Thursday 09 April 2026 01:01:50 +0000 (0:00:00.063) 0:01:53.732 ******** 2026-04-09 01:02:58.889309 | orchestrator | 2026-04-09 01:02:58.889313 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-09 01:02:58.889317 | orchestrator | Thursday 09 April 2026 01:01:50 +0000 (0:00:00.062) 0:01:53.794 ******** 2026-04-09 01:02:58.889322 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889326 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:58.889339 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:58.889368 | orchestrator | 2026-04-09 01:02:58.889376 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-09 01:02:58.889387 | orchestrator | Thursday 09 April 2026 01:02:16 +0000 (0:00:25.531) 0:02:19.326 ******** 2026-04-09 01:02:58.889395 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889402 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:58.889409 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:58.889416 | orchestrator | 2026-04-09 01:02:58.889423 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-09 01:02:58.889430 | orchestrator | Thursday 09 April 2026 01:02:22 +0000 (0:00:06.683) 0:02:26.009 ******** 2026-04-09 01:02:58.889442 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889446 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:58.889451 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:58.889455 | orchestrator | 2026-04-09 01:02:58.889459 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-09 01:02:58.889465 | orchestrator | Thursday 09 April 2026 01:02:42 +0000 (0:00:20.050) 0:02:46.059 ******** 2026-04-09 01:02:58.889470 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:02:58.889474 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:02:58.889478 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:02:58.889482 | orchestrator | 2026-04-09 01:02:58.889487 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-09 01:02:58.889491 | orchestrator | Thursday 09 April 2026 01:02:55 +0000 (0:00:12.491) 0:02:58.551 ******** 2026-04-09 01:02:58.889495 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:02:58.889499 | orchestrator | 2026-04-09 01:02:58.889504 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-09 01:02:58.889508 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 01:02:58.889513 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:02:58.889517 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 01:02:58.889521 | orchestrator | 2026-04-09 01:02:58.889525 | orchestrator | 2026-04-09 01:02:58.889530 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:02:58.889534 | orchestrator | Thursday 09 April 2026 01:02:55 +0000 (0:00:00.222) 0:02:58.774 ******** 2026-04-09 01:02:58.889541 | orchestrator | =============================================================================== 2026-04-09 01:02:58.889550 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.53s 2026-04-09 01:02:58.889559 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.47s 2026-04-09 01:02:58.889565 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 20.05s 2026-04-09 01:02:58.889572 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.34s 2026-04-09 01:02:58.889578 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.49s 2026-04-09 01:02:58.889585 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.43s 2026-04-09 01:02:58.889591 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.68s 2026-04-09 01:02:58.889596 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.67s 2026-04-09 01:02:58.889602 | orchestrator | cinder : Copying over config.json 
files for services -------------------- 4.64s 2026-04-09 01:02:58.889608 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.46s 2026-04-09 01:02:58.889614 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.41s 2026-04-09 01:02:58.889620 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.25s 2026-04-09 01:02:58.889626 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.18s 2026-04-09 01:02:58.889632 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.61s 2026-04-09 01:02:58.889639 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.58s 2026-04-09 01:02:58.889645 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.57s 2026-04-09 01:02:58.889652 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.35s 2026-04-09 01:02:58.889658 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.93s 2026-04-09 01:02:58.889665 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.49s 2026-04-09 01:02:58.889679 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.30s 2026-04-09 01:02:58.889686 | orchestrator | 2026-04-09 01:02:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:01.923847 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:03:01.925527 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:03:01.927769 | orchestrator | 2026-04-09 01:03:01 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state STARTED 2026-04-09 01:03:01.929435 | orchestrator | 2026-04-09 01:03:01 | 
INFO  | Task 61059790-a2ff-4d12-985f-c5606dc9367d is in state STARTED 2026-04-09 01:03:01.929584 | orchestrator | 2026-04-09 01:03:01 | INFO  | Wait 1 second(s) until the next check [... ~35 identical polling cycles omitted: all four tasks remained in state STARTED, checked every ~3 seconds from 01:03:04 through 01:04:48 ...] 2026-04-09 01:04:51.325210 | orchestrator | 2026-04-09 01:04:51 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:04:51.325602 | orchestrator | 2026-04-09 01:04:51 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:04:51.326186 | orchestrator | 2026-04-09 01:04:51 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state STARTED 2026-04-09 01:04:51.327456 | orchestrator | 2026-04-09 01:04:51 | INFO  | Task 
61059790-a2ff-4d12-985f-c5606dc9367d is in state STARTED 2026-04-09 01:04:51.327471 | orchestrator | 2026-04-09 01:04:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:54.365486 | orchestrator | 2026-04-09 01:04:54 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:04:54.365878 | orchestrator | 2026-04-09 01:04:54 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:04:54.367464 | orchestrator | 2026-04-09 01:04:54 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state STARTED 2026-04-09 01:04:54.369172 | orchestrator | 2026-04-09 01:04:54.369207 | orchestrator | 2026-04-09 01:04:54 | INFO  | Task 61059790-a2ff-4d12-985f-c5606dc9367d is in state SUCCESS 2026-04-09 01:04:54.370423 | orchestrator | 2026-04-09 01:04:54.370448 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:04:54.370456 | orchestrator | 2026-04-09 01:04:54.370463 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:04:54.370470 | orchestrator | Thursday 09 April 2026 01:02:58 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-04-09 01:04:54.370477 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:04:54.370485 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:04:54.370492 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:04:54.370499 | orchestrator | 2026-04-09 01:04:54.370506 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:04:54.370513 | orchestrator | Thursday 09 April 2026 01:02:58 +0000 (0:00:00.247) 0:00:00.561 ******** 2026-04-09 01:04:54.370519 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-09 01:04:54.370526 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-09 01:04:54.370533 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-09 
01:04:54.370561 | orchestrator | 2026-04-09 01:04:54.370568 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-09 01:04:54.370575 | orchestrator | 2026-04-09 01:04:54.370582 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 01:04:54.370589 | orchestrator | Thursday 09 April 2026 01:02:58 +0000 (0:00:00.249) 0:00:00.810 ******** 2026-04-09 01:04:54.370596 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:04:54.370603 | orchestrator | 2026-04-09 01:04:54.370610 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-09 01:04:54.370617 | orchestrator | Thursday 09 April 2026 01:02:59 +0000 (0:00:00.658) 0:00:01.469 ******** 2026-04-09 01:04:54.370624 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-09 01:04:54.370631 | orchestrator | 2026-04-09 01:04:54.370638 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-09 01:04:54.370645 | orchestrator | Thursday 09 April 2026 01:03:03 +0000 (0:00:03.775) 0:00:05.245 ******** 2026-04-09 01:04:54.370652 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-09 01:04:54.370659 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-09 01:04:54.370665 | orchestrator | 2026-04-09 01:04:54.370672 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-09 01:04:54.370679 | orchestrator | Thursday 09 April 2026 01:03:10 +0000 (0:00:07.216) 0:00:12.461 ******** 2026-04-09 01:04:54.370685 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:04:54.370693 | orchestrator | 2026-04-09 01:04:54.370700 | orchestrator 
| TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-09 01:04:54.370707 | orchestrator | Thursday 09 April 2026 01:03:13 +0000 (0:00:03.174) 0:00:15.636 ******** 2026-04-09 01:04:54.370714 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-09 01:04:54.370721 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:04:54.370728 | orchestrator | 2026-04-09 01:04:54.370735 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-09 01:04:54.370742 | orchestrator | Thursday 09 April 2026 01:03:17 +0000 (0:00:04.038) 0:00:19.674 ******** 2026-04-09 01:04:54.370749 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:04:54.370756 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-09 01:04:54.370762 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-09 01:04:54.370768 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-09 01:04:54.370775 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-09 01:04:54.370782 | orchestrator | 2026-04-09 01:04:54.370789 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-09 01:04:54.370841 | orchestrator | Thursday 09 April 2026 01:03:34 +0000 (0:00:16.932) 0:00:36.607 ******** 2026-04-09 01:04:54.370849 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-09 01:04:54.370856 | orchestrator | 2026-04-09 01:04:54.370863 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-09 01:04:54.370871 | orchestrator | Thursday 09 April 2026 01:03:38 +0000 (0:00:04.251) 0:00:40.858 ******** 2026-04-09 01:04:54.370881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 01:04:54.370911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.370921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.370929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 01:04:54.370937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-04-09 01:04:54.370944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.370962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.370969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2026-04-09 01:04:54.370976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.370983 | orchestrator | 2026-04-09 01:04:54.370990 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-09 01:04:54.370997 | orchestrator | Thursday 09 April 2026 01:03:41 +0000 (0:00:03.151) 0:00:44.010 ******** 2026-04-09 01:04:54.371004 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-09 01:04:54.371011 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-09 01:04:54.371019 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-09 01:04:54.371026 | orchestrator | 2026-04-09 01:04:54.371033 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-09 01:04:54.371040 | orchestrator | Thursday 09 April 2026 01:03:44 +0000 (0:00:02.121) 0:00:46.132 ******** 2026-04-09 01:04:54.371047 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:54.371055 | orchestrator | 2026-04-09 01:04:54.371063 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-09 01:04:54.371070 | orchestrator | Thursday 09 April 2026 01:03:44 +0000 (0:00:00.283) 0:00:46.418 ******** 2026-04-09 01:04:54.371078 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:54.371086 | orchestrator | skipping: [testbed-node-1] 
2026-04-09 01:04:54.371093 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:54.371100 | orchestrator | 2026-04-09 01:04:54.371107 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-09 01:04:54.371115 | orchestrator | Thursday 09 April 2026 01:03:45 +0000 (0:00:00.702) 0:00:47.121 ******** 2026-04-09 01:04:54.371122 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:04:54.371129 | orchestrator | 2026-04-09 01:04:54.371136 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-09 01:04:54.371144 | orchestrator | Thursday 09 April 2026 01:03:45 +0000 (0:00:00.834) 0:00:47.955 ******** 2026-04-09 01:04:54.371151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 01:04:54.371170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 01:04:54.371179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-09 01:04:54.371187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.371194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.371206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.371214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.371241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.371249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:04:54.371255 | orchestrator | 2026-04-09 01:04:54.371262 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-09 01:04:54.371269 | orchestrator | Thursday 09 April 2026 01:03:49 +0000 (0:00:03.327) 0:00:51.283 ******** 2026-04-09 01:04:54.371275 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 01:04:54.371282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:54.371293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:54.371300 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:04:54.371312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 01:04:54.371319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:54.371326 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:54.371332 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:04:54.371339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-09 01:04:54.371350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 01:04:54.371357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:04:54.371364 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:04:54.371370 | orchestrator | 2026-04-09 01:04:54.371377 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-09 01:04:54.371384 | orchestrator | Thursday 09 April 2026 01:03:50 +0000 (0:00:01.424) 0:00:52.708 ******** 2026-04-09 01:04:54.371397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371422 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:04:54.371430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371454 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:04:54.371466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371492 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:04:54.371499 | orchestrator |
2026-04-09 01:04:54.371505 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-04-09 01:04:54.371511 | orchestrator | Thursday 09 April 2026 01:03:52 +0000 (0:00:01.531) 0:00:54.240 ********
2026-04-09 01:04:54.371518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371734 | orchestrator |
2026-04-09 01:04:54.371741 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-09 01:04:54.371749 | orchestrator | Thursday 09 April 2026 01:03:55 +0000 (0:00:03.588) 0:00:57.828 ********
2026-04-09 01:04:54.371755 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:04:54.371762 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.371768 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:04:54.371775 | orchestrator |
2026-04-09 01:04:54.371782 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-09 01:04:54.371793 | orchestrator | Thursday 09 April 2026 01:03:58 +0000 (0:00:02.400) 0:01:00.229 ********
2026-04-09 01:04:54.371800 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 01:04:54.371806 | orchestrator |
2026-04-09 01:04:54.371813 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-09 01:04:54.371820 | orchestrator | Thursday 09 April 2026 01:03:59 +0000 (0:00:01.326) 0:01:01.556 ********
2026-04-09 01:04:54.371827 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:04:54.371834 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:04:54.371840 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:04:54.371848 | orchestrator |
2026-04-09 01:04:54.371855 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-09 01:04:54.371861 | orchestrator | Thursday 09 April 2026 01:04:00 +0000 (0:00:01.246) 0:01:02.803 ********
2026-04-09 01:04:54.371868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371936 | orchestrator |
2026-04-09 01:04:54.371946 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-04-09 01:04:54.371953 | orchestrator | Thursday 09 April 2026 01:04:10 +0000 (0:00:09.589) 0:01:12.392 ********
2026-04-09 01:04:54.371963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.371973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.371988 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:04:54.371995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.372002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372023 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:04:54.372030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.372036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372050 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:04:54.372056 | orchestrator |
2026-04-09 01:04:54.372062 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-04-09 01:04:54.372069 | orchestrator | Thursday 09 April 2026 01:04:11 +0000 (0:00:00.813) 0:01:13.206 ********
2026-04-09 01:04:54.372076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.372090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.372101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-09 01:04:54.372108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-09 01:04:54.372158 | orchestrator |
2026-04-09 01:04:54.372165 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-09 01:04:54.372171 | orchestrator | Thursday 09 April 2026 01:04:14 +0000 (0:00:03.323) 0:01:16.529 ********
2026-04-09 01:04:54.372177 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:04:54.372184 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:04:54.372191 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:04:54.372197 | orchestrator |
2026-04-09 01:04:54.372204 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-09 01:04:54.372211 | orchestrator | Thursday 09 April 2026 01:04:14 +0000 (0:00:00.510) 0:01:17.040 ********
2026-04-09 01:04:54.372218 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.372242 | orchestrator |
2026-04-09 01:04:54.372250 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-09 01:04:54.372257 | orchestrator | Thursday 09 April 2026 01:04:17 +0000 (0:00:02.577) 0:01:19.618 ********
2026-04-09 01:04:54.372263 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.372270 | orchestrator |
2026-04-09 01:04:54.372277 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-09 01:04:54.372284 | orchestrator | Thursday 09 April 2026 01:04:20 +0000 (0:00:02.648) 0:01:22.267 ********
2026-04-09 01:04:54.372291 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.372297 | orchestrator |
2026-04-09 01:04:54.372304 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-09 01:04:54.372311 | orchestrator | Thursday 09 April 2026 01:04:32 +0000 (0:00:12.225) 0:01:34.493 ********
2026-04-09 01:04:54.372318 | orchestrator |
2026-04-09 01:04:54.372325 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-09 01:04:54.372333 | orchestrator | Thursday 09 April 2026 01:04:32 +0000 (0:00:00.190) 0:01:34.683 ********
2026-04-09 01:04:54.372340 | orchestrator |
2026-04-09 01:04:54.372347 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-09 01:04:54.372355 | orchestrator | Thursday 09 April 2026 01:04:32 +0000 (0:00:00.050) 0:01:34.733 ********
2026-04-09 01:04:54.372361 | orchestrator |
2026-04-09 01:04:54.372368 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-09 01:04:54.372375 | orchestrator | Thursday 09 April 2026 01:04:32 +0000 (0:00:00.051) 0:01:34.785 ********
2026-04-09 01:04:54.372383 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.372390 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:04:54.372397 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:04:54.372405 | orchestrator |
2026-04-09 01:04:54.372412 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-09 01:04:54.372425 | orchestrator | Thursday 09 April 2026 01:04:43 +0000 (0:00:10.619) 0:01:45.404 ********
2026-04-09 01:04:54.372433 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.372440 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:04:54.372447 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:04:54.372453 | orchestrator |
2026-04-09 01:04:54.372460 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-09 01:04:54.372467 | orchestrator | Thursday 09 April 2026 01:04:48 +0000 (0:00:05.459) 0:01:50.864 ********
2026-04-09 01:04:54.372474 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:04:54.372481 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:04:54.372488 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:04:54.372495 | orchestrator |
2026-04-09 01:04:54.372502 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:04:54.372509 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:04:54.372517 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 01:04:54.372524 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 01:04:54.372531 | orchestrator |
2026-04-09 01:04:54.372538 | orchestrator |
2026-04-09 01:04:54.372544 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:04:54.372551 | orchestrator | Thursday 09 April 2026 01:04:54 +0000 (0:00:05.390) 0:01:56.254 ********
2026-04-09 01:04:54.372557 | orchestrator | ===============================================================================
2026-04-09 01:04:54.372567 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.93s
2026-04-09 01:04:54.372577 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.23s
2026-04-09 01:04:54.372584 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.62s
2026-04-09 01:04:54.372591 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.59s
2026-04-09 01:04:54.372598 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.22s
2026-04-09 01:04:54.372605 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.46s 2026-04-09 01:04:54.372611 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.39s 2026-04-09 01:04:54.372618 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.25s 2026-04-09 01:04:54.372625 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.04s 2026-04-09 01:04:54.372632 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.78s 2026-04-09 01:04:54.372639 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.59s 2026-04-09 01:04:54.372646 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.33s 2026-04-09 01:04:54.372652 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.33s 2026-04-09 01:04:54.372659 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.17s 2026-04-09 01:04:54.372666 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.15s 2026-04-09 01:04:54.372673 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.65s 2026-04-09 01:04:54.372680 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.58s 2026-04-09 01:04:54.372687 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.40s 2026-04-09 01:04:54.372694 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.12s 2026-04-09 01:04:54.372701 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.53s 2026-04-09 01:04:54.372708 | orchestrator | 2026-04-09 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:57.414377 
| orchestrator | 2026-04-09 01:04:57 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:04:57.416379 | orchestrator | 2026-04-09 01:04:57 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:04:57.420005 | orchestrator | 2026-04-09 01:04:57 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state STARTED 2026-04-09 01:04:57.422159 | orchestrator | 2026-04-09 01:04:57 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state STARTED 2026-04-09 01:04:57.422521 | orchestrator | 2026-04-09 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:52.247503 | orchestrator | 2026-04-09 01:05:52 | INFO  | Task
aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:05:52.248339 | orchestrator | 2026-04-09 01:05:52 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:05:52.249419 | orchestrator | 2026-04-09 01:05:52 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state STARTED 2026-04-09 01:05:52.250433 | orchestrator | 2026-04-09 01:05:52 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state STARTED 2026-04-09 01:05:52.250474 | orchestrator | 2026-04-09 01:05:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:55.293598 | orchestrator | 2026-04-09 01:05:55 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED 2026-04-09 01:05:55.297356 | orchestrator | 2026-04-09 01:05:55 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:05:55.301172 | orchestrator | 2026-04-09 01:05:55 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:05:55.303957 | orchestrator | 2026-04-09 01:05:55 | INFO  | Task 8207565b-76ad-4589-a5f0-721f5da300cc is in state SUCCESS 2026-04-09 01:05:55.309057 | orchestrator | 2026-04-09 01:05:55.309273 | orchestrator | 2026-04-09 01:05:55.309333 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:05:55.309340 | orchestrator | 2026-04-09 01:05:55.309345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:05:55.309353 | orchestrator | Thursday 09 April 2026 01:02:58 +0000 (0:00:00.317) 0:00:00.317 ******** 2026-04-09 01:05:55.309473 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:05:55.309482 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:05:55.309487 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:05:55.309493 | orchestrator | 2026-04-09 01:05:55.309498 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09
01:05:55.309504 | orchestrator | Thursday 09 April 2026 01:02:59 +0000 (0:00:00.257) 0:00:00.575 ******** 2026-04-09 01:05:55.309509 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-09 01:05:55.309515 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-09 01:05:55.309520 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-09 01:05:55.309526 | orchestrator | 2026-04-09 01:05:55.309531 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-09 01:05:55.309536 | orchestrator | 2026-04-09 01:05:55.309541 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:05:55.309546 | orchestrator | Thursday 09 April 2026 01:02:59 +0000 (0:00:00.252) 0:00:00.827 ******** 2026-04-09 01:05:55.309552 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:05:55.309558 | orchestrator | 2026-04-09 01:05:55.309564 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-04-09 01:05:55.309569 | orchestrator | Thursday 09 April 2026 01:02:59 +0000 (0:00:00.542) 0:00:01.369 ******** 2026-04-09 01:05:55.309574 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-09 01:05:55.309579 | orchestrator | 2026-04-09 01:05:55.309601 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-04-09 01:05:55.309607 | orchestrator | Thursday 09 April 2026 01:03:03 +0000 (0:00:03.646) 0:00:05.015 ******** 2026-04-09 01:05:55.309612 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-09 01:05:55.309617 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-09 01:05:55.309622 | orchestrator | 
2026-04-09 01:05:55.309626 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-09 01:05:55.309631 | orchestrator | Thursday 09 April 2026 01:03:10 +0000 (0:00:07.321) 0:00:12.337 ******** 2026-04-09 01:05:55.309636 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:05:55.309641 | orchestrator | 2026-04-09 01:05:55.309646 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-09 01:05:55.309651 | orchestrator | Thursday 09 April 2026 01:03:14 +0000 (0:00:03.263) 0:00:15.600 ******** 2026-04-09 01:05:55.309657 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-09 01:05:55.309662 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:05:55.309668 | orchestrator | 2026-04-09 01:05:55.309673 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-09 01:05:55.309688 | orchestrator | Thursday 09 April 2026 01:03:18 +0000 (0:00:04.161) 0:00:19.761 ******** 2026-04-09 01:05:55.309699 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:05:55.309737 | orchestrator | 2026-04-09 01:05:55.309742 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-09 01:05:55.309748 | orchestrator | Thursday 09 April 2026 01:03:21 +0000 (0:00:03.719) 0:00:23.481 ******** 2026-04-09 01:05:55.309753 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-09 01:05:55.309758 | orchestrator | 2026-04-09 01:05:55.309762 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-09 01:05:55.309767 | orchestrator | Thursday 09 April 2026 01:03:26 +0000 (0:00:04.228) 0:00:27.710 ******** 2026-04-09 01:05:55.309783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.309848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.309871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.309891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309897 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309929 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.309996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310045 | orchestrator | 2026-04-09 01:05:55.310052 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-09 01:05:55.310058 | orchestrator | Thursday 09 April 2026 01:03:29 +0000 (0:00:03.642) 0:00:31.352 ******** 2026-04-09 01:05:55.310064 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.310070 | orchestrator | 2026-04-09 01:05:55.310076 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-09 01:05:55.310081 | orchestrator | Thursday 09 April 2026 01:03:30 +0000 (0:00:00.257) 0:00:31.610 ******** 2026-04-09 01:05:55.310087 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.310092 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:05:55.310098 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:05:55.310103 | orchestrator | 2026-04-09 01:05:55.310108 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:05:55.310114 | orchestrator | Thursday 09 April 2026 01:03:30 +0000 (0:00:00.536) 0:00:32.147 ******** 2026-04-09 01:05:55.310120 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:05:55.310125 | orchestrator | 2026-04-09 01:05:55.310131 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-09 01:05:55.310136 | orchestrator | Thursday 09 April 2026 01:03:31 +0000 (0:00:00.589) 0:00:32.736 ******** 2026-04-09 01:05:55.310392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.310424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.310432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.310438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310454 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.310599 | orchestrator | 2026-04-09 01:05:55.310605 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-09 01:05:55.310611 | orchestrator | Thursday 09 April 2026 01:03:37 +0000 (0:00:06.084) 0:00:38.821 ******** 2026-04-09 01:05:55.310619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.310633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.310639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.310645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.310682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.310688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.310693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.310705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.310711 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.311639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311718 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:05:55.311725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.311752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.311768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311800 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:05:55.311811 | orchestrator | 2026-04-09 01:05:55.311818 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-09 01:05:55.311824 | orchestrator | Thursday 09 April 2026 01:03:38 +0000 (0:00:01.260) 0:00:40.081 ******** 2026-04-09 01:05:55.311831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.311844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.311855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311881 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.311886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.311899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.311908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311934 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:05:55.311939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.311957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.311963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.311992 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:05:55.311998 | orchestrator | 2026-04-09 01:05:55.312008 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-09 
01:05:55.312014 | orchestrator | Thursday 09 April 2026 01:03:40 +0000 (0:00:02.023) 0:00:42.105 ******** 2026-04-09 01:05:55.312020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.312030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.312041 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.312048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2026-04-09 01:05:55.312106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312179 | orchestrator | 2026-04-09 01:05:55.312185 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-09 01:05:55.312191 | orchestrator | Thursday 09 April 2026 01:03:47 +0000 (0:00:06.684) 0:00:48.789 ******** 2026-04-09 01:05:55.312197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.312203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.312213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.312224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312282 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312351 | orchestrator | 2026-04-09 01:05:55.312357 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-09 01:05:55.312364 | orchestrator | Thursday 09 April 2026 01:04:09 +0000 (0:00:21.913) 0:01:10.703 ******** 2026-04-09 01:05:55.312370 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 01:05:55.312377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 01:05:55.312384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-09 01:05:55.312392 | orchestrator | 2026-04-09 01:05:55.312398 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-09 01:05:55.312404 | orchestrator | Thursday 09 April 2026 01:04:14 +0000 (0:00:05.737) 0:01:16.440 ******** 2026-04-09 01:05:55.312411 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 01:05:55.312417 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 01:05:55.312424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-09 01:05:55.312431 | orchestrator | 2026-04-09 01:05:55.312438 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-09 01:05:55.312445 | orchestrator | Thursday 09 April 2026 01:04:18 +0000 (0:00:03.736) 0:01:20.176 ******** 2026-04-09 01:05:55.312452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 
01:05:55.312495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312611 | orchestrator | 2026-04-09 01:05:55.312618 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-09 01:05:55.312625 | orchestrator | Thursday 09 April 2026 01:04:22 +0000 (0:00:03.979) 0:01:24.156 ******** 2026-04-09 01:05:55.312631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312663 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.312899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.312924 | orchestrator | 2026-04-09 01:05:55.312929 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:05:55.312934 | orchestrator | Thursday 09 April 2026 01:04:25 +0000 (0:00:03.208) 0:01:27.365 ******** 2026-04-09 01:05:55.312939 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.312945 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 01:05:55.312952 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:05:55.312961 | orchestrator | 2026-04-09 01:05:55.312968 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-09 01:05:55.312974 | orchestrator | Thursday 09 April 2026 01:04:26 +0000 (0:00:00.368) 0:01:27.734 ******** 2026-04-09 01:05:55.312987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.312995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.313004 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313038 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.313184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.313200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.313207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313245 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:05:55.313252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-09 01:05:55.313263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 01:05:55.313270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:05:55.313308 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:05:55.313314 | orchestrator | 2026-04-09 01:05:55.313321 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-09 01:05:55.313327 | orchestrator | Thursday 09 April 2026 01:04:27 +0000 (0:00:01.467) 0:01:29.201 ******** 2026-04-09 01:05:55.313334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.313345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.313357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-09 01:05:55.313364 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313386 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:05:55.313479 | orchestrator | 2026-04-09 01:05:55.313485 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-09 01:05:55.313491 | orchestrator | Thursday 09 April 2026 01:04:33 +0000 (0:00:05.783) 0:01:34.984 ******** 2026-04-09 01:05:55.313498 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:05:55.313505 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:05:55.313511 | orchestrator | skipping: [testbed-node-2] 
2026-04-09 01:05:55.313518 | orchestrator |
2026-04-09 01:05:55.313523 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-04-09 01:05:55.313529 | orchestrator | Thursday 09 April 2026 01:04:34 +0000 (0:00:00.659) 0:01:35.644 ********
2026-04-09 01:05:55.313535 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-09 01:05:55.313540 | orchestrator |
2026-04-09 01:05:55.313546 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-09 01:05:55.313552 | orchestrator | Thursday 09 April 2026 01:04:36 +0000 (0:00:02.163) 0:01:37.808 ********
2026-04-09 01:05:55.313558 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 01:05:55.313564 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-09 01:05:55.313570 | orchestrator |
2026-04-09 01:05:55.313576 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-09 01:05:55.313582 | orchestrator | Thursday 09 April 2026 01:04:38 +0000 (0:00:02.301) 0:01:40.109 ********
2026-04-09 01:05:55.313592 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313598 | orchestrator |
2026-04-09 01:05:55.313603 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-09 01:05:55.313609 | orchestrator | Thursday 09 April 2026 01:04:53 +0000 (0:00:15.063) 0:01:55.172 ********
2026-04-09 01:05:55.313615 | orchestrator |
2026-04-09 01:05:55.313620 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-09 01:05:55.313626 | orchestrator | Thursday 09 April 2026 01:04:53 +0000 (0:00:00.132) 0:01:55.305 ********
2026-04-09 01:05:55.313632 | orchestrator |
2026-04-09 01:05:55.313638 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-09 01:05:55.313643 | orchestrator | Thursday 09 April 2026 01:04:53 +0000 (0:00:00.124) 0:01:55.430 ********
2026-04-09 01:05:55.313649 | orchestrator |
2026-04-09 01:05:55.313655 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-09 01:05:55.313665 | orchestrator | Thursday 09 April 2026 01:04:54 +0000 (0:00:00.088) 0:01:55.518 ********
2026-04-09 01:05:55.313671 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313677 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:05:55.313683 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:05:55.313689 | orchestrator |
2026-04-09 01:05:55.313695 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-09 01:05:55.313700 | orchestrator | Thursday 09 April 2026 01:05:06 +0000 (0:00:12.656) 0:02:08.175 ********
2026-04-09 01:05:55.313707 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313712 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:05:55.313718 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:05:55.313723 | orchestrator |
2026-04-09 01:05:55.313729 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-09 01:05:55.313735 | orchestrator | Thursday 09 April 2026 01:05:13 +0000 (0:00:06.598) 0:02:14.774 ********
2026-04-09 01:05:55.313742 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313747 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:05:55.313751 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:05:55.313756 | orchestrator |
2026-04-09 01:05:55.313760 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-09 01:05:55.313765 | orchestrator | Thursday 09 April 2026 01:05:24 +0000 (0:00:10.998) 0:02:25.772 ********
2026-04-09 01:05:55.313769 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313773 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:05:55.313778 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:05:55.313782 | orchestrator |
2026-04-09 01:05:55.313787 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-09 01:05:55.313793 | orchestrator | Thursday 09 April 2026 01:05:29 +0000 (0:00:05.023) 0:02:30.796 ********
2026-04-09 01:05:55.313799 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313806 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:05:55.313812 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:05:55.313818 | orchestrator |
2026-04-09 01:05:55.313824 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-09 01:05:55.313829 | orchestrator | Thursday 09 April 2026 01:05:39 +0000 (0:00:10.646) 0:02:41.443 ********
2026-04-09 01:05:55.313835 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313840 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:05:55.313846 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:05:55.313852 | orchestrator |
2026-04-09 01:05:55.313862 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-09 01:05:55.313867 | orchestrator | Thursday 09 April 2026 01:05:46 +0000 (0:00:06.544) 0:02:47.987 ********
2026-04-09 01:05:55.313873 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:05:55.313879 | orchestrator |
2026-04-09 01:05:55.313886 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:05:55.313892 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:05:55.313904 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09 01:05:55.313911 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-09
01:05:55.313917 | orchestrator |
2026-04-09 01:05:55.313923 | orchestrator |
2026-04-09 01:05:55.313929 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:05:55.313935 | orchestrator | Thursday 09 April 2026 01:05:53 +0000 (0:00:07.322) 0:02:55.309 ********
2026-04-09 01:05:55.313942 | orchestrator | ===============================================================================
2026-04-09 01:05:55.313948 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.91s
2026-04-09 01:05:55.313954 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.06s
2026-04-09 01:05:55.313959 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.66s
2026-04-09 01:05:55.313965 | orchestrator | designate : Restart designate-central container ------------------------ 11.00s
2026-04-09 01:05:55.313971 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.65s
2026-04-09 01:05:55.313976 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.32s
2026-04-09 01:05:55.313982 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.32s
2026-04-09 01:05:55.313988 | orchestrator | designate : Copying over config.json files for services ----------------- 6.69s
2026-04-09 01:05:55.313994 | orchestrator | designate : Restart designate-api container ----------------------------- 6.60s
2026-04-09 01:05:55.314000 | orchestrator | designate : Restart designate-worker container -------------------------- 6.54s
2026-04-09 01:05:55.314005 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.08s
2026-04-09 01:05:55.314066 | orchestrator | designate : Check designate containers ---------------------------------- 5.78s
2026-04-09 01:05:55.314077 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.74s
2026-04-09 01:05:55.314082 | orchestrator | designate : Restart designate-producer container ------------------------ 5.02s
2026-04-09 01:05:55.314089 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.23s
2026-04-09 01:05:55.314095 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.16s
2026-04-09 01:05:55.314101 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.98s
2026-04-09 01:05:55.314107 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.74s
2026-04-09 01:05:55.314113 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.72s
2026-04-09 01:05:55.314119 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.65s
2026-04-09 01:05:55.314132 | orchestrator | 2026-04-09 01:05:55 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state STARTED
2026-04-09 01:05:55.314138 | orchestrator | 2026-04-09 01:05:55 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:05:58.341487 | orchestrator | 2026-04-09 01:05:58 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:05:58.343300 | orchestrator | 2026-04-09 01:05:58 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:05:58.344162 | orchestrator | 2026-04-09 01:05:58 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:05:58.345816 | orchestrator | 2026-04-09 01:05:58 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state STARTED
2026-04-09 01:05:58.345921 | orchestrator | 2026-04-09 01:05:58 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:01.435193 | orchestrator | 2026-04-09 01:06:01 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:01.436052 |
orchestrator | 2026-04-09 01:06:01 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:01.438946 | orchestrator | 2026-04-09 01:06:01 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:01.439825 | orchestrator | 2026-04-09 01:06:01 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state STARTED
2026-04-09 01:06:01.439859 | orchestrator | 2026-04-09 01:06:01 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:04.463546 | orchestrator | 2026-04-09 01:06:04 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:04.464166 | orchestrator | 2026-04-09 01:06:04 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:04.466193 | orchestrator | 2026-04-09 01:06:04 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:04.466795 | orchestrator | 2026-04-09 01:06:04 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state STARTED
2026-04-09 01:06:04.466872 | orchestrator | 2026-04-09 01:06:04 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:07.501161 | orchestrator | 2026-04-09 01:06:07 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:07.501639 | orchestrator | 2026-04-09 01:06:07 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:07.501942 | orchestrator | 2026-04-09 01:06:07 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:07.502332 | orchestrator | 2026-04-09 01:06:07 | INFO  | Task 583895c4-52a0-4d2e-9ae1-507eed6a0d22 is in state SUCCESS
2026-04-09 01:06:07.502411 | orchestrator | 2026-04-09 01:06:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:10.550958 | orchestrator | 2026-04-09 01:06:10 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:10.551143 | orchestrator | 2026-04-09 01:06:10 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:10.551788 | orchestrator | 2026-04-09 01:06:10 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:10.552243 | orchestrator | 2026-04-09 01:06:10 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:10.552257 | orchestrator | 2026-04-09 01:06:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:13.572064 | orchestrator | 2026-04-09 01:06:13 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:13.574060 | orchestrator | 2026-04-09 01:06:13 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:13.574717 | orchestrator | 2026-04-09 01:06:13 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:13.575187 | orchestrator | 2026-04-09 01:06:13 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:13.575306 | orchestrator | 2026-04-09 01:06:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:16.614633 | orchestrator | 2026-04-09 01:06:16 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:16.616829 | orchestrator | 2026-04-09 01:06:16 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:16.617731 | orchestrator | 2026-04-09 01:06:16 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:16.618680 | orchestrator | 2026-04-09 01:06:16 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:16.618736 | orchestrator | 2026-04-09 01:06:16 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:19.652898 | orchestrator | 2026-04-09 01:06:19 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:19.653429 | orchestrator | 2026-04-09 01:06:19 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:19.656034 | orchestrator | 2026-04-09 01:06:19 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:19.656080 | orchestrator | 2026-04-09 01:06:19 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:19.656092 | orchestrator | 2026-04-09 01:06:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:22.692433 | orchestrator | 2026-04-09 01:06:22 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:22.694392 | orchestrator | 2026-04-09 01:06:22 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:22.696379 | orchestrator | 2026-04-09 01:06:22 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:22.698217 | orchestrator | 2026-04-09 01:06:22 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:22.698259 | orchestrator | 2026-04-09 01:06:22 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:25.731122 | orchestrator | 2026-04-09 01:06:25 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:25.732780 | orchestrator | 2026-04-09 01:06:25 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:25.734544 | orchestrator | 2026-04-09 01:06:25 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:25.735794 | orchestrator | 2026-04-09 01:06:25 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:25.735929 | orchestrator | 2026-04-09 01:06:25 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:28.763001 | orchestrator | 2026-04-09 01:06:28 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:28.764640 | orchestrator | 2026-04-09 01:06:28 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:28.765052 | orchestrator | 2026-04-09 01:06:28 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:28.765748 | orchestrator | 2026-04-09 01:06:28 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:28.765785 | orchestrator | 2026-04-09 01:06:28 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:31.794437 | orchestrator | 2026-04-09 01:06:31 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:31.796872 | orchestrator | 2026-04-09 01:06:31 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:31.799064 | orchestrator | 2026-04-09 01:06:31 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:31.801381 | orchestrator | 2026-04-09 01:06:31 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:31.801601 | orchestrator | 2026-04-09 01:06:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:34.845484 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:34.847651 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:34.849750 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:34.851600 | orchestrator | 2026-04-09 01:06:34 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:34.851674 | orchestrator | 2026-04-09 01:06:34 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:37.889581 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:37.890956 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:37.893782 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:37.895834 | orchestrator | 2026-04-09 01:06:37 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:37.896004 | orchestrator | 2026-04-09 01:06:37 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:40.932825 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:40.933991 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:40.934803 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:40.936254 | orchestrator | 2026-04-09 01:06:40 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:40.936365 | orchestrator | 2026-04-09 01:06:40 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:43.988706 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:43.990783 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED
2026-04-09 01:06:43.992551 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:43.993649 | orchestrator | 2026-04-09 01:06:43 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:43.993901 | orchestrator | 2026-04-09 01:06:43 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:47.034480 | orchestrator | 2026-04-09 01:06:47 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED
2026-04-09 01:06:47.035930 | orchestrator | 2026-04-09 01:06:47 | INFO  | Task
aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:06:47.038744 | orchestrator | 2026-04-09 01:06:47 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:06:47.040778 | orchestrator | 2026-04-09 01:06:47 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:06:47.040817 | orchestrator | 2026-04-09 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:50.090906 | orchestrator | 2026-04-09 01:06:50 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED 2026-04-09 01:06:50.092876 | orchestrator | 2026-04-09 01:06:50 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:06:50.095534 | orchestrator | 2026-04-09 01:06:50 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:06:50.098171 | orchestrator | 2026-04-09 01:06:50 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:06:50.098226 | orchestrator | 2026-04-09 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:53.145654 | orchestrator | 2026-04-09 01:06:53 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED 2026-04-09 01:06:53.147248 | orchestrator | 2026-04-09 01:06:53 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state STARTED 2026-04-09 01:06:53.148082 | orchestrator | 2026-04-09 01:06:53 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:06:53.148841 | orchestrator | 2026-04-09 01:06:53 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:06:53.148996 | orchestrator | 2026-04-09 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:56.208170 | orchestrator | 2026-04-09 01:06:56 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state STARTED 2026-04-09 01:06:56.208759 | orchestrator | 2026-04-09 01:06:56 | INFO  | Task 
cdd434f8-ab84-4482-90c3-86cce326 ed02 is in state STARTED
2026-04-09 01:06:56.211770 | orchestrator | 2026-04-09 01:06:56 | INFO  | Task aa80dd5f-2585-4768-adfe-f1acb1e42e38 is in state SUCCESS
2026-04-09 01:06:56.213007 | orchestrator |
2026-04-09 01:06:56.213046 | orchestrator |
2026-04-09 01:06:56.213071 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-09 01:06:56.213080 | orchestrator |
2026-04-09 01:06:56.213087 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-09 01:06:56.213094 | orchestrator | Thursday 09 April 2026 01:04:58 +0000 (0:00:00.084) 0:00:00.084 ********
2026-04-09 01:06:56.213101 | orchestrator | changed: [localhost]
2026-04-09 01:06:56.213108 | orchestrator |
2026-04-09 01:06:56.213114 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-09 01:06:56.213118 | orchestrator | Thursday 09 April 2026 01:04:59 +0000 (0:00:00.847) 0:00:00.932 ********
2026-04-09 01:06:56.213123 | orchestrator | changed: [localhost]
2026-04-09 01:06:56.213128 | orchestrator |
2026-04-09 01:06:56.213133 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-09 01:06:56.213138 | orchestrator | Thursday 09 April 2026 01:05:57 +0000 (0:00:58.126) 0:00:59.058 ********
2026-04-09 01:06:56.213142 | orchestrator | changed: [localhost]
2026-04-09 01:06:56.213147 | orchestrator |
2026-04-09 01:06:56.213152 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:06:56.213156 | orchestrator |
2026-04-09 01:06:56.213161 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:06:56.213165 | orchestrator | Thursday 09 April 2026 01:06:04 +0000 (0:00:06.724) 0:01:05.783 ********
2026-04-09 01:06:56.213170 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:06:56.213174 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:06:56.213179 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:06:56.213183 | orchestrator |
2026-04-09 01:06:56.213188 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:06:56.213192 | orchestrator | Thursday 09 April 2026 01:06:05 +0000 (0:00:00.767) 0:01:06.550 ********
2026-04-09 01:06:56.213198 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-09 01:06:56.213207 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-09 01:06:56.213217 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-09 01:06:56.213223 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-09 01:06:56.213230 | orchestrator |
2026-04-09 01:06:56.213236 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-09 01:06:56.213243 | orchestrator | skipping: no hosts matched
2026-04-09 01:06:56.213249 | orchestrator |
2026-04-09 01:06:56.213255 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:06:56.213261 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 01:06:56.213318 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 01:06:56.213357 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 01:06:56.213364 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 01:06:56.213371 | orchestrator |
2026-04-09 01:06:56.213377 | orchestrator |
2026-04-09 01:06:56.213384 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:06:56.213391 | orchestrator | Thursday 09 April 2026 01:06:06 +0000 (0:00:01.128) 0:01:07.679 ********
2026-04-09 01:06:56.213397 | orchestrator | ===============================================================================
2026-04-09 01:06:56.213404 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 58.13s
2026-04-09 01:06:56.213410 | orchestrator | Download ironic-agent kernel -------------------------------------------- 6.72s
2026-04-09 01:06:56.213433 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s
2026-04-09 01:06:56.213437 | orchestrator | Ensure the destination directory exists --------------------------------- 0.85s
2026-04-09 01:06:56.213480 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s
2026-04-09 01:06:56.213487 | orchestrator |
2026-04-09 01:06:56.213494 | orchestrator |
2026-04-09 01:06:56.213500 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:06:56.213506 | orchestrator |
2026-04-09 01:06:56.213513 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:06:56.213520 | orchestrator | Thursday 09 April 2026 01:02:43 +0000 (0:00:00.342) 0:00:00.342 ********
2026-04-09 01:06:56.213526 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:06:56.213533 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:06:56.213540 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:06:56.213545 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:06:56.213549 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:06:56.213553 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:06:56.213559 | orchestrator |
2026-04-09 01:06:56.213565 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:06:56.213572 | orchestrator | Thursday 09 April 2026 01:02:44 +0000 (0:00:00.845) 0:00:01.187 ********
2026-04-09 01:06:56.213578 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-09 01:06:56.213584 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-09 01:06:56.213591 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-09 01:06:56.213597 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-09 01:06:56.213603 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-09 01:06:56.213610 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-09 01:06:56.213616 | orchestrator |
2026-04-09 01:06:56.213622 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-09 01:06:56.213628 | orchestrator |
2026-04-09 01:06:56.213634 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-09 01:06:56.213641 | orchestrator | Thursday 09 April 2026 01:02:45 +0000 (0:00:01.398) 0:00:02.586 ********
2026-04-09 01:06:56.213660 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 01:06:56.213667 | orchestrator |
2026-04-09 01:06:56.213673 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-09 01:06:56.213679 | orchestrator | Thursday 09 April 2026 01:02:47 +0000 (0:00:01.342) 0:00:03.928 ********
2026-04-09 01:06:56.213685 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:06:56.213692 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:06:56.213698 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:06:56.213705 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:06:56.213711 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:06:56.213717 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:06:56.213724 | orchestrator |
2026-04-09 01:06:56.213730 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-09 01:06:56.213744 | orchestrator | Thursday 09 April 2026 01:02:48 +0000 (0:00:01.226) 0:00:05.154 ********
2026-04-09 01:06:56.213750 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:06:56.213756 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:06:56.213762 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:06:56.213768 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:06:56.213774 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:06:56.213781 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:06:56.213787 | orchestrator |
2026-04-09 01:06:56.213793 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-09 01:06:56.213799 | orchestrator | Thursday 09 April 2026 01:02:49 +0000 (0:00:01.101) 0:00:06.256 ********
2026-04-09 01:06:56.213806 | orchestrator | ok: [testbed-node-0] => {
2026-04-09 01:06:56.213812 | orchestrator |     "changed": false,
2026-04-09 01:06:56.213819 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:06:56.213825 | orchestrator | }
2026-04-09 01:06:56.213831 | orchestrator | ok: [testbed-node-1] => {
2026-04-09 01:06:56.213837 | orchestrator |     "changed": false,
2026-04-09 01:06:56.213844 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:06:56.213850 | orchestrator | }
2026-04-09 01:06:56.213855 | orchestrator | ok: [testbed-node-2] => {
2026-04-09 01:06:56.213862 | orchestrator |     "changed": false,
2026-04-09 01:06:56.213868 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:06:56.213875 | orchestrator | }
2026-04-09 01:06:56.213881 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 01:06:56.213887 | orchestrator |     "changed": false,
2026-04-09 01:06:56.213893 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:06:56.213899 | orchestrator | }
2026-04-09 01:06:56.213906 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 01:06:56.213912 | orchestrator |     "changed": false,
2026-04-09 01:06:56.213918 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:06:56.213924 | orchestrator | }
2026-04-09 01:06:56.213931 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 01:06:56.213954 | orchestrator |     "changed": false,
2026-04-09 01:06:56.213961 | orchestrator |     "msg": "All assertions passed"
2026-04-09 01:06:56.213968 | orchestrator | }
2026-04-09 01:06:56.213974 | orchestrator |
2026-04-09 01:06:56.213981 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-09 01:06:56.213987 | orchestrator | Thursday 09 April 2026 01:02:50 +0000 (0:00:00.514) 0:00:06.770 ********
2026-04-09 01:06:56.213993 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.214000 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.214143 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.214151 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.214159 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.214165 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.214173 | orchestrator |
2026-04-09 01:06:56.214180 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-09 01:06:56.214187 | orchestrator | Thursday 09 April 2026 01:02:50 +0000 (0:00:00.607) 0:00:07.378 ********
2026-04-09 01:06:56.214193 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-09 01:06:56.214200 | orchestrator |
2026-04-09 01:06:56.214206 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-09 01:06:56.214213 | orchestrator | Thursday 09 April 2026 01:02:53 +0000 (0:00:03.075) 0:00:10.454 ********
2026-04-09 01:06:56.214220 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-09 01:06:56.214226 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-09 01:06:56.214233 | orchestrator |
2026-04-09 01:06:56.214240 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-09 01:06:56.214246 | orchestrator | Thursday 09 April 2026 01:02:59 +0000 (0:00:06.098) 0:00:16.553 ********
2026-04-09 01:06:56.214254 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 01:06:56.214267 | orchestrator |
2026-04-09 01:06:56.214273 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-09 01:06:56.214280 | orchestrator | Thursday 09 April 2026 01:03:03 +0000 (0:00:03.402) 0:00:19.955 ********
2026-04-09 01:06:56.214287 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-09 01:06:56.214294 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 01:06:56.214300 | orchestrator |
2026-04-09 01:06:56.214314 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-09 01:06:56.214320 | orchestrator | Thursday 09 April 2026 01:03:07 +0000 (0:00:04.182) 0:00:24.137 ********
2026-04-09 01:06:56.214326 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 01:06:56.214333 | orchestrator |
2026-04-09 01:06:56.214340 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-09 01:06:56.214346 | orchestrator | Thursday 09 April 2026 01:03:10 +0000 (0:00:03.517) 0:00:27.655 ********
2026-04-09 01:06:56.214353 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-09 01:06:56.214359 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-09 01:06:56.214365 | orchestrator |
2026-04-09 01:06:56.214371 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-09 01:06:56.214378 | orchestrator | Thursday 09 April 2026 01:03:18 +0000 (0:00:07.923) 0:00:35.578 ********
2026-04-09 01:06:56.214384 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.214390 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.214407 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.214414 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.214420 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.214427 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.214433 | orchestrator |
2026-04-09 01:06:56.214440 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-09 01:06:56.214446 | orchestrator | Thursday 09 April 2026 01:03:19 +0000 (0:00:00.641) 0:00:36.220 ********
2026-04-09 01:06:56.214452 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.214458 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.214465 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.214471 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.214478 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.214484 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.214490 | orchestrator |
2026-04-09 01:06:56.214496 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-09 01:06:56.214502 | orchestrator | Thursday 09 April 2026 01:03:21 +0000 (0:00:02.034) 0:00:38.254 ********
2026-04-09 01:06:56.214509 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:06:56.214515 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:06:56.214522 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:06:56.214528 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:06:56.214534 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:06:56.214541 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:06:56.214547 | orchestrator |
2026-04-09 01:06:56.214553 | orchestrator | TASK [Setting sysctl 
values] *************************************************** 2026-04-09 01:06:56.214560 | orchestrator | Thursday 09 April 2026 01:03:22 +0000 (0:00:00.886) 0:00:39.140 ******** 2026-04-09 01:06:56.214566 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.214572 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.214578 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.214584 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.214590 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.214597 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.214603 | orchestrator | 2026-04-09 01:06:56.214610 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-09 01:06:56.214616 | orchestrator | Thursday 09 April 2026 01:03:24 +0000 (0:00:01.800) 0:00:40.941 ******** 2026-04-09 01:06:56.214625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.214638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.214646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.214658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.214678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.214688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-04-09 01:06:56.214695 | orchestrator | 2026-04-09 01:06:56.214702 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-09 01:06:56.214709 | orchestrator | Thursday 09 April 2026 01:03:26 +0000 (0:00:02.263) 0:00:43.204 ******** 2026-04-09 01:06:56.214715 | orchestrator | [WARNING]: Skipped 2026-04-09 01:06:56.214722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-09 01:06:56.214729 | orchestrator | due to this access issue: 2026-04-09 01:06:56.214735 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-09 01:06:56.214741 | orchestrator | a directory 2026-04-09 01:06:56.214747 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:06:56.214753 | orchestrator | 2026-04-09 01:06:56.214760 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-09 01:06:56.214766 | orchestrator | Thursday 09 April 2026 01:03:27 +0000 (0:00:00.777) 0:00:43.982 ******** 2026-04-09 01:06:56.214772 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:06:56.214780 | orchestrator | 2026-04-09 01:06:56.214786 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-09 01:06:56.214793 | orchestrator | Thursday 09 April 2026 01:03:28 +0000 (0:00:01.253) 0:00:45.236 ******** 2026-04-09 01:06:56.214799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.214811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.214822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.214829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.214836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.214842 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.214849 | orchestrator | 2026-04-09 01:06:56.214855 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-09 01:06:56.214865 | orchestrator | Thursday 09 April 2026 01:03:32 +0000 (0:00:03.706) 0:00:48.942 ******** 2026-04-09 01:06:56.214872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.214883 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 01:06:56.214889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.214896 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.214902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.214909 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 01:06:56.214916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.214922 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.214933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.214943 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.214950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.214956 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.214962 | orchestrator | 2026-04-09 01:06:56.214968 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-09 01:06:56.214975 | orchestrator | Thursday 09 April 2026 01:03:34 +0000 (0:00:02.526) 0:00:51.468 ******** 2026-04-09 01:06:56.214981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.214988 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.214994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215001 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215017 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215034 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215048 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.215079 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.215085 | orchestrator |
2026-04-09 01:06:56.215091 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-04-09 01:06:56.215098 | orchestrator | Thursday 09 April 2026 01:03:37 +0000 (0:00:02.617) 0:00:54.086 ********
2026-04-09 01:06:56.215104 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.215110 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.215116 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.215122 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.215128 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.215135 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.215141 | orchestrator |
2026-04-09 01:06:56.215148 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-09 01:06:56.215154 | orchestrator | Thursday 09 April 2026 01:03:40 +0000 (0:00:03.177) 0:00:57.264 ********
2026-04-09 01:06:56.215160 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.215167 | orchestrator |
2026-04-09 01:06:56.215173 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-09 01:06:56.215179 | orchestrator | Thursday 09 April 2026 01:03:40 +0000 (0:00:00.200) 0:00:57.465 ********
2026-04-09 01:06:56.215185 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.215192 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.215198 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.215204 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215211 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215222 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215228 | orchestrator | 2026-04-09 01:06:56.215235 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-09 01:06:56.215242 | orchestrator | Thursday 09 April 2026 01:03:41 +0000 (0:00:00.503) 0:00:57.968 ******** 2026-04-09 01:06:56.215254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215261 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215274 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215287 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215304 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215316 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-09 01:06:56.215328 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215332 | orchestrator | 2026-04-09 01:06:56.215336 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-09 01:06:56.215340 | orchestrator | Thursday 09 April 2026 01:03:43 +0000 (0:00:02.490) 0:01:00.458 ******** 2026-04-09 01:06:56.215346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.215360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.215371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.215395 | orchestrator | 2026-04-09 01:06:56.215398 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-09 01:06:56.215402 | orchestrator | Thursday 09 April 2026 01:03:46 +0000 (0:00:02.844) 0:01:03.303 ******** 2026-04-09 01:06:56.215406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.215422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.215430 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-09 01:06:56.215434 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215441 | orchestrator | 2026-04-09 01:06:56.215445 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-09 01:06:56.215449 | orchestrator | Thursday 09 April 2026 01:03:53 +0000 (0:00:06.769) 0:01:10.073 ******** 2026-04-09 01:06:56.215453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215457 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215467 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215475 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 01:06:56.215479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215488 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215495 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215503 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215507 | orchestrator | 2026-04-09 01:06:56.215511 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-09 01:06:56.215515 | orchestrator | Thursday 09 April 2026 01:03:55 +0000 (0:00:02.346) 0:01:12.420 ******** 2026-04-09 01:06:56.215519 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215522 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:56.215526 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215530 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215536 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:56.215543 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:56.215552 | orchestrator | 2026-04-09 01:06:56.215559 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-09 01:06:56.215568 | orchestrator | Thursday 09 April 2026 01:03:59 +0000 (0:00:03.752) 0:01:16.172 ******** 2026-04-09 01:06:56.215575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215581 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215598 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215612 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-09 01:06:56.215634 | orchestrator | 2026-04-09 01:06:56.215638 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-09 01:06:56.215642 | orchestrator | Thursday 09 April 2026 01:04:04 +0000 (0:00:05.096) 0:01:21.269 ******** 2026-04-09 01:06:56.215649 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215653 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215656 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215660 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215664 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215668 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215671 | orchestrator | 2026-04-09 01:06:56.215675 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-09 01:06:56.215679 | 
orchestrator | Thursday 09 April 2026 01:04:07 +0000 (0:00:03.452) 0:01:24.721 ******** 2026-04-09 01:06:56.215683 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215686 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215690 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215694 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215697 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215701 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215705 | orchestrator | 2026-04-09 01:06:56.215709 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-09 01:06:56.215713 | orchestrator | Thursday 09 April 2026 01:04:10 +0000 (0:00:02.925) 0:01:27.646 ******** 2026-04-09 01:06:56.215716 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215720 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215724 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215727 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215731 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215735 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215739 | orchestrator | 2026-04-09 01:06:56.215743 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-09 01:06:56.215747 | orchestrator | Thursday 09 April 2026 01:04:13 +0000 (0:00:02.922) 0:01:30.569 ******** 2026-04-09 01:06:56.215750 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215754 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215758 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215762 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215766 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215769 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215773 | orchestrator | 2026-04-09 
01:06:56.215777 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-09 01:06:56.215781 | orchestrator | Thursday 09 April 2026 01:04:15 +0000 (0:00:01.957) 0:01:32.526 ******** 2026-04-09 01:06:56.215784 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215788 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215792 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215797 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215803 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215809 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215816 | orchestrator | 2026-04-09 01:06:56.215822 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-09 01:06:56.215828 | orchestrator | Thursday 09 April 2026 01:04:18 +0000 (0:00:02.647) 0:01:35.174 ******** 2026-04-09 01:06:56.215835 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215841 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215848 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215854 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215858 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215861 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215865 | orchestrator | 2026-04-09 01:06:56.215869 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-09 01:06:56.215873 | orchestrator | Thursday 09 April 2026 01:04:21 +0000 (0:00:02.585) 0:01:37.760 ******** 2026-04-09 01:06:56.215876 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:56.215880 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215887 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 
01:06:56.215891 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215895 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:56.215899 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.215903 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:56.215907 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215913 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:56.215917 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.215921 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-09 01:06:56.215925 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215929 | orchestrator | 2026-04-09 01:06:56.215932 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-09 01:06:56.215936 | orchestrator | Thursday 09 April 2026 01:04:24 +0000 (0:00:03.117) 0:01:40.877 ******** 2026-04-09 01:06:56.215940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215945 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 01:06:56.215949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215952 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.215957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215961 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 01:06:56.215965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.215974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215979 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.215982 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.215986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.215990 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.215994 | orchestrator | 2026-04-09 01:06:56.215998 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-09 01:06:56.216002 | orchestrator | Thursday 09 April 2026 01:04:26 +0000 (0:00:02.005) 0:01:42.883 ******** 2026-04-09 01:06:56.216006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.216010 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 01:06:56.216014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.216022 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.216029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-09 01:06:56.216033 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 01:06:56.216037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.216040 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.216044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.216048 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.216129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 01:06:56.216137 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.216141 | orchestrator | 2026-04-09 01:06:56.216145 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-09 01:06:56.216148 | orchestrator | Thursday 09 April 2026 01:04:28 +0000 (0:00:02.786) 0:01:45.669 ******** 2026-04-09 01:06:56.216152 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.216156 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.216160 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.216164 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.216167 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.216171 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.216175 | orchestrator | 2026-04-09 01:06:56.216179 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-09 01:06:56.216182 | orchestrator | Thursday 09 April 2026 01:04:30 +0000 (0:00:01.964) 0:01:47.634 ******** 2026-04-09 01:06:56.216186 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.216190 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.216194 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.216197 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:06:56.216201 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:06:56.216205 | orchestrator | changed: [testbed-node-3] 2026-04-09 
01:06:56.216209 | orchestrator | 2026-04-09 01:06:56.216212 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-09 01:06:56.216216 | orchestrator | Thursday 09 April 2026 01:04:34 +0000 (0:00:03.721) 0:01:51.356 ******** 2026-04-09 01:06:56.216220 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.216224 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.216228 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.216232 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.216235 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.216239 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.216243 | orchestrator | 2026-04-09 01:06:56.216247 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-09 01:06:56.216251 | orchestrator | Thursday 09 April 2026 01:04:36 +0000 (0:00:02.024) 0:01:53.380 ******** 2026-04-09 01:06:56.216254 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.216258 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.216262 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.216304 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:06:56.216309 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:06:56.216313 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:06:56.216317 | orchestrator | 2026-04-09 01:06:56.216321 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-09 01:06:56.216325 | orchestrator | Thursday 09 April 2026 01:04:39 +0000 (0:00:02.418) 0:01:55.799 ******** 2026-04-09 01:06:56.216329 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:56.216333 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:56.216337 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:56.216341 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
01:06:56.216345 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216348 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216352 | orchestrator |
2026-04-09 01:06:56.216356 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-09 01:06:56.216360 | orchestrator | Thursday 09 April 2026 01:04:41 +0000 (0:00:02.175) 0:01:57.974 ********
2026-04-09 01:06:56.216364 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216368 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216371 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216375 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216379 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216383 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216389 | orchestrator |
2026-04-09 01:06:56.216393 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-09 01:06:56.216397 | orchestrator | Thursday 09 April 2026 01:04:42 +0000 (0:00:01.498) 0:01:59.473 ********
2026-04-09 01:06:56.216401 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216405 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216408 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216412 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216416 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216420 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216424 | orchestrator |
2026-04-09 01:06:56.216434 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-09 01:06:56.216447 | orchestrator | Thursday 09 April 2026 01:04:45 +0000 (0:00:02.569) 0:02:02.042 ********
2026-04-09 01:06:56.216457 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216463 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216469 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216475 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216481 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216486 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216491 | orchestrator |
2026-04-09 01:06:56.216498 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-09 01:06:56.216503 | orchestrator | Thursday 09 April 2026 01:04:47 +0000 (0:00:01.787) 0:02:03.830 ********
2026-04-09 01:06:56.216510 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216517 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216523 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216530 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216536 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216543 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216549 | orchestrator |
2026-04-09 01:06:56.216555 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-09 01:06:56.216559 | orchestrator | Thursday 09 April 2026 01:04:48 +0000 (0:00:02.950) 0:02:05.745 ********
2026-04-09 01:06:56.216563 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:56.216567 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216571 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:56.216574 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216578 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:56.216582 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216586 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:56.216590 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216594 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:56.216597 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216601 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-09 01:06:56.216605 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216609 | orchestrator |
2026-04-09 01:06:56.216612 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-09 01:06:56.216616 | orchestrator | Thursday 09 April 2026 01:04:51 +0000 (0:00:02.950) 0:02:08.696 ********
2026-04-09 01:06:56.216624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 01:06:56.216633 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.216641 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 01:06:56.216649 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 01:06:56.216657 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.216669 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.216684 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216690 | orchestrator |
2026-04-09 01:06:56.216697 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-04-09 01:06:56.216704 | orchestrator | Thursday 09 April 2026 01:04:54 +0000 (0:00:02.522) 0:02:11.219 ********
2026-04-09 01:06:56.216710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 01:06:56.216717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.216724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 01:06:56.216730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.216741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-09 01:06:56.216746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-09 01:06:56.216750 | orchestrator |
2026-04-09 01:06:56.216754 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-09 01:06:56.216758 | orchestrator | Thursday 09 April 2026 01:04:57 +0000 (0:00:00.615) 0:02:14.515 ********
2026-04-09 01:06:56.216761 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:56.216765 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:56.216769 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:56.216773 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:06:56.216777 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:06:56.216780 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:06:56.216784 | orchestrator |
2026-04-09 01:06:56.216788 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-09 01:06:56.216792 | orchestrator | Thursday 09 April 2026 01:04:58 +0000 (0:00:00.615) 0:02:15.131 ********
2026-04-09 01:06:56.216796 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:56.216799 | orchestrator |
2026-04-09 01:06:56.216803 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-09 01:06:56.216807 | orchestrator | Thursday 09 April 2026 01:05:00 +0000 (0:00:02.080) 0:02:17.211 ********
2026-04-09 01:06:56.216810 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:56.216814 | orchestrator |
2026-04-09 01:06:56.216818 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-09 01:06:56.216822 | orchestrator | Thursday 09 April 2026 01:05:02 +0000 (0:00:02.129) 0:02:19.341 ********
2026-04-09 01:06:56.216825 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:56.216829 | orchestrator |
2026-04-09 01:06:56.216833 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:56.216837 | orchestrator | Thursday 09 April 2026 01:05:41 +0000 (0:00:39.051) 0:02:58.392 ********
2026-04-09 01:06:56.216840 | orchestrator |
2026-04-09 01:06:56.216846 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:56.216850 | orchestrator | Thursday 09 April 2026 01:05:41 +0000 (0:00:00.144) 0:02:58.536 ********
2026-04-09 01:06:56.216854 | orchestrator |
2026-04-09 01:06:56.216858 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:56.216862 | orchestrator | Thursday 09 April 2026 01:05:41 +0000 (0:00:00.140) 0:02:58.677 ********
2026-04-09 01:06:56.216865 | orchestrator |
2026-04-09 01:06:56.216869 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:56.216873 | orchestrator | Thursday 09 April 2026 01:05:42 +0000 (0:00:00.156) 0:02:58.834 ********
2026-04-09 01:06:56.216876 | orchestrator |
2026-04-09 01:06:56.216880 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:56.216884 | orchestrator | Thursday 09 April 2026 01:05:42 +0000 (0:00:00.103) 0:02:58.938 ********
2026-04-09 01:06:56.216887 | orchestrator |
2026-04-09 01:06:56.216891 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-09 01:06:56.216895 | orchestrator | Thursday 09 April 2026 01:05:42 +0000 (0:00:00.134) 0:02:59.073 ********
2026-04-09 01:06:56.216899 | orchestrator |
2026-04-09 01:06:56.216903 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-09 01:06:56.216907 | orchestrator | Thursday 09 April 2026 01:05:42 +0000 (0:00:00.123) 0:02:59.196 ********
2026-04-09 01:06:56.216910 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:06:56.216914 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:06:56.216918 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:06:56.216921 | orchestrator |
2026-04-09 01:06:56.216925 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-09 01:06:56.216929 | orchestrator | Thursday 09 April 2026 01:06:01 +0000 (0:00:19.372) 0:03:18.568 ********
2026-04-09 01:06:56.216933 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:06:56.216937 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:06:56.216940 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:06:56.216944 | orchestrator |
2026-04-09 01:06:56.216948 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:06:56.216951 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:56.216956 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-09 01:06:56.216962 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-09 01:06:56.216966 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:56.216970 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:56.216974 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 01:06:56.216977 | orchestrator |
2026-04-09 01:06:56.216981 | orchestrator |
2026-04-09 01:06:56.216985 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:06:56.216989 | orchestrator | Thursday 09 April 2026 01:06:54 +0000 (0:00:52.845) 0:04:11.414 ********
2026-04-09 01:06:56.216993 | orchestrator | ===============================================================================
2026-04-09 01:06:56.216997 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 52.85s
2026-04-09 01:06:56.217000 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.05s
2026-04-09 01:06:56.217004 | orchestrator | neutron : Restart neutron-server container ----------------------------- 19.37s
2026-04-09 01:06:56.217018 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.92s
2026-04-09 01:06:56.217022 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.77s
2026-04-09 01:06:56.217025 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.10s
2026-04-09 01:06:56.217029 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.10s
2026-04-09 01:06:56.217033 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.18s
2026-04-09 01:06:56.217037 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.75s
2026-04-09 01:06:56.217041 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.72s
2026-04-09 01:06:56.217044 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.71s
2026-04-09 01:06:56.217048 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.52s
2026-04-09 01:06:56.217065 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.45s
2026-04-09 01:06:56.217069 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.40s
2026-04-09 01:06:56.217073 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.30s
2026-04-09 01:06:56.217077 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.18s
2026-04-09 01:06:56.217081 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.12s
2026-04-09 01:06:56.217085 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.08s
2026-04-09 01:06:56.217088 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 2.95s
2026-04-09 01:06:56.217092 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 2.93s
2026-04-09 01:06:56.217097 | orchestrator | 2026-04-09 01:06:56 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:06:56.217101 | orchestrator | 2026-04-09 01:06:56 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED
2026-04-09 01:06:56.217104 | orchestrator | 2026-04-09 01:06:56 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:59.245932 | orchestrator | 2026-04-09 01:06:59 | INFO  | Task ecd80b24-9abf-40eb-8666-deaeb4de6aa7 is in state SUCCESS
2026-04-09 01:06:59.248249 | orchestrator |
2026-04-09 01:06:59.248317 | orchestrator |
2026-04-09 01:06:59.248329 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:06:59.248340 | orchestrator |
2026-04-09 01:06:59.248350 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:06:59.248360 | orchestrator | Thursday 09 April 2026 01:05:58 +0000 (0:00:00.657) 0:00:00.658 ********
2026-04-09 01:06:59.248370 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:06:59.248380 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:06:59.248389 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:06:59.248398 | orchestrator |
2026-04-09 01:06:59.248407 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:06:59.248417 | orchestrator | Thursday 09 April 2026 01:05:59 +0000 (0:00:00.670) 0:00:01.328 ********
2026-04-09 01:06:59.248427 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-09 01:06:59.248438 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-09 01:06:59.248447 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-09 01:06:59.248457 | orchestrator |
2026-04-09 01:06:59.248466 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-09 01:06:59.248475 | orchestrator |
2026-04-09 01:06:59.248485 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-09 01:06:59.248494 | orchestrator | Thursday 09 April 2026 01:05:59 +0000 (0:00:00.338) 0:00:01.666 ********
2026-04-09 01:06:59.248501 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:06:59.248536 | orchestrator |
2026-04-09 01:06:59.248543 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-04-09 01:06:59.248549 | orchestrator | Thursday 09 April 2026 01:06:00 +0000 (0:00:00.564) 0:00:02.231 ********
2026-04-09 01:06:59.248556 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-09 01:06:59.248562 | orchestrator |
2026-04-09 01:06:59.248568 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-04-09 01:06:59.248575 | orchestrator | Thursday 09 April 2026 01:06:03 +0000 (0:00:03.386) 0:00:05.617 ********
2026-04-09 01:06:59.248581 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-09 01:06:59.248588 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-09 01:06:59.248594 | orchestrator |
2026-04-09 01:06:59.248598 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-09 01:06:59.248602 | orchestrator | Thursday 09 April 2026 01:06:09 +0000 (0:00:05.927) 0:00:11.544 ********
2026-04-09 01:06:59.248606 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 01:06:59.248610 | orchestrator |
2026-04-09 01:06:59.248613 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-09 01:06:59.248617 | orchestrator | Thursday 09 April 2026 01:06:12 +0000 (0:00:03.143) 0:00:14.688 ********
2026-04-09 01:06:59.248621 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-09 01:06:59.248625 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 01:06:59.248628 | orchestrator |
2026-04-09 01:06:59.248632 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-09 01:06:59.248636 | orchestrator | Thursday 09 April 2026 01:06:16 +0000 (0:00:03.582) 0:00:18.271 ********
2026-04-09 01:06:59.248640 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 01:06:59.248643 | orchestrator |
2026-04-09 01:06:59.248647 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-04-09 01:06:59.248651 | orchestrator | Thursday 09 April 2026 01:06:19 +0000 (0:00:03.102) 0:00:21.374 ********
2026-04-09 01:06:59.248655 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-04-09 01:06:59.248658 | orchestrator |
2026-04-09 01:06:59.248662 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-09 01:06:59.248666 | orchestrator | Thursday 09 April 2026 01:06:23 +0000 (0:00:03.492) 0:00:24.866 ********
2026-04-09 01:06:59.248670 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:59.248673 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:59.248677 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:59.248681 | orchestrator |
2026-04-09 01:06:59.248685 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-04-09 01:06:59.248688 | orchestrator | Thursday 09 April 2026 01:06:23 +0000 (0:00:00.405) 0:00:25.271 ********
2026-04-09 01:06:59.248694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248725 | orchestrator |
2026-04-09 01:06:59.248728 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-04-09 01:06:59.248732 | orchestrator | Thursday 09 April 2026 01:06:25 +0000 (0:00:01.841) 0:00:27.113 ********
2026-04-09 01:06:59.248736 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:59.248740 | orchestrator |
2026-04-09 01:06:59.248744 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-04-09 01:06:59.248748 | orchestrator | Thursday 09 April 2026 01:06:25 +0000 (0:00:00.094) 0:00:27.207 ********
2026-04-09 01:06:59.248752 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:59.248756 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:59.248760 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:59.248763 | orchestrator |
2026-04-09 01:06:59.248767 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-09 01:06:59.248771 | orchestrator | Thursday 09 April 2026 01:06:25 +0000 (0:00:00.235) 0:00:27.443 ********
2026-04-09 01:06:59.248775 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:06:59.248778 | orchestrator |
2026-04-09 01:06:59.248782 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-04-09 01:06:59.248786 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.566) 0:00:28.010 ********
2026-04-09 01:06:59.248790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248810 | orchestrator |
2026-04-09 01:06:59.248815 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-04-09 01:06:59.248820 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:01.466) 0:00:29.476 ********
2026-04-09 01:06:59.248827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248836 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:59.248846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248852 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:06:59.248866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248873 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:06:59.248879 | orchestrator |
2026-04-09 01:06:59.248885 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-04-09 01:06:59.248891 | orchestrator | Thursday 09 April 2026 01:06:28 +0000 (0:00:00.618) 0:00:30.095 ********
2026-04-09 01:06:59.248898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-09 01:06:59.248905 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:06:59.248912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name':
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 01:06:59.248919 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:59.248927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 01:06:59.248940 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:59.248946 | orchestrator | 2026-04-09 01:06:59.248955 | orchestrator | TASK [placement : Copying over 
config.json files for services] ***************** 2026-04-09 01:06:59.248965 | orchestrator | Thursday 09 April 2026 01:06:29 +0000 (0:00:00.763) 0:00:30.859 ******** 2026-04-09 01:06:59.248984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.248991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.248998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249004 | orchestrator | 2026-04-09 01:06:59.249011 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-09 01:06:59.249017 | orchestrator | Thursday 09 April 2026 01:06:30 +0000 (0:00:01.666) 0:00:32.525 ******** 2026-04-09 01:06:59.249023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249085 | orchestrator | 2026-04-09 01:06:59.249092 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-09 01:06:59.249097 | orchestrator | Thursday 09 April 2026 01:06:32 +0000 (0:00:02.059) 0:00:34.585 ******** 2026-04-09 01:06:59.249103 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-09 01:06:59.249110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-09 01:06:59.249116 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-09 01:06:59.249121 | orchestrator | 2026-04-09 01:06:59.249127 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-09 01:06:59.249133 | orchestrator | Thursday 09 April 2026 01:06:34 +0000 (0:00:01.215) 0:00:35.801 ******** 2026-04-09 01:06:59.249138 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:59.249144 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:59.249151 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:59.249157 | orchestrator | 2026-04-09 01:06:59.249164 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-09 01:06:59.249171 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:01.195) 0:00:36.997 ******** 2026-04-09 01:06:59.249175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 01:06:59.249183 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:06:59.249189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 01:06:59.249195 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:06:59.249208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-09 01:06:59.249213 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:06:59.249222 | orchestrator | 2026-04-09 01:06:59.249229 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-09 01:06:59.249235 | orchestrator | Thursday 09 April 2026 01:06:35 +0000 (0:00:00.568) 0:00:37.565 ******** 2026-04-09 01:06:59.249240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-09 01:06:59.249268 | orchestrator | 2026-04-09 01:06:59.249274 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-09 01:06:59.249279 | orchestrator | Thursday 09 April 2026 01:06:36 +0000 (0:00:00.919) 0:00:38.485 ******** 2026-04-09 01:06:59.249285 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:59.249290 | orchestrator | 2026-04-09 01:06:59.249296 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-09 01:06:59.249304 | orchestrator | Thursday 09 April 2026 01:06:38 +0000 (0:00:01.808) 0:00:40.294 ******** 2026-04-09 01:06:59.249310 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:59.249316 | orchestrator | 2026-04-09 01:06:59.249321 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-09 01:06:59.249327 | orchestrator | Thursday 09 April 2026 01:06:40 +0000 (0:00:01.967) 0:00:42.261 ******** 2026-04-09 01:06:59.249333 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:59.249339 | orchestrator | 2026-04-09 01:06:59.249345 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 01:06:59.249350 | orchestrator | Thursday 09 April 2026 01:06:52 +0000 (0:00:12.192) 0:00:54.454 ******** 2026-04-09 01:06:59.249356 | orchestrator | 2026-04-09 01:06:59.249362 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 01:06:59.249369 | orchestrator | Thursday 09 April 2026 01:06:52 +0000 (0:00:00.067) 0:00:54.521 ******** 2026-04-09 01:06:59.249375 | orchestrator | 2026-04-09 01:06:59.249463 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-09 01:06:59.249471 | orchestrator | Thursday 09 April 2026 01:06:52 +0000 (0:00:00.065) 0:00:54.586 ******** 2026-04-09 01:06:59.249475 | orchestrator | 2026-04-09 01:06:59.249478 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-09 01:06:59.249482 | orchestrator | Thursday 09 April 2026 01:06:52 +0000 (0:00:00.064) 0:00:54.651 ******** 2026-04-09 01:06:59.249486 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:06:59.249490 | 
orchestrator | changed: [testbed-node-2] 2026-04-09 01:06:59.249493 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:06:59.249497 | orchestrator | 2026-04-09 01:06:59.249501 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:06:59.249505 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 01:06:59.249509 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 01:06:59.249513 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 01:06:59.249521 | orchestrator | 2026-04-09 01:06:59.249525 | orchestrator | 2026-04-09 01:06:59.249529 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:06:59.249533 | orchestrator | Thursday 09 April 2026 01:06:57 +0000 (0:00:05.108) 0:00:59.759 ******** 2026-04-09 01:06:59.249537 | orchestrator | =============================================================================== 2026-04-09 01:06:59.249541 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.19s 2026-04-09 01:06:59.249544 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.93s 2026-04-09 01:06:59.249548 | orchestrator | placement : Restart placement-api container ----------------------------- 5.11s 2026-04-09 01:06:59.249552 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.58s 2026-04-09 01:06:59.249556 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.49s 2026-04-09 01:06:59.249561 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.39s 2026-04-09 01:06:59.249568 | orchestrator | service-ks-register : placement | Creating projects --------------------- 
3.14s 2026-04-09 01:06:59.249574 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.10s 2026-04-09 01:06:59.249580 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.06s 2026-04-09 01:06:59.249586 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.97s 2026-04-09 01:06:59.249592 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.84s 2026-04-09 01:06:59.249599 | orchestrator | placement : Creating placement databases -------------------------------- 1.81s 2026-04-09 01:06:59.249605 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s 2026-04-09 01:06:59.249611 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.47s 2026-04-09 01:06:59.249617 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.22s 2026-04-09 01:06:59.249624 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.20s 2026-04-09 01:06:59.249630 | orchestrator | placement : Check placement containers ---------------------------------- 0.92s 2026-04-09 01:06:59.249636 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.76s 2026-04-09 01:06:59.249642 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s 2026-04-09 01:06:59.249648 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.62s 2026-04-09 01:06:59.249659 | orchestrator | 2026-04-09 01:06:59 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:06:59.252777 | orchestrator | 2026-04-09 01:06:59 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:06:59.253392 | orchestrator | 2026-04-09 01:06:59 | INFO  | Task 
3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:06:59.253413 | orchestrator | 2026-04-09 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:02.281766 | orchestrator | 2026-04-09 01:07:02 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:02.282122 | orchestrator | 2026-04-09 01:07:02 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:02.282832 | orchestrator | 2026-04-09 01:07:02 | INFO  | Task 4f0eb7aa-35e3-49c5-b948-bbfa6896f91e is in state STARTED 2026-04-09 01:07:02.284171 | orchestrator | 2026-04-09 01:07:02 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:02.284202 | orchestrator | 2026-04-09 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:05.320320 | orchestrator | 2026-04-09 01:07:05 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:05.321663 | orchestrator | 2026-04-09 01:07:05 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:05.321875 | orchestrator | 2026-04-09 01:07:05 | INFO  | Task 4f0eb7aa-35e3-49c5-b948-bbfa6896f91e is in state SUCCESS 2026-04-09 01:07:05.322628 | orchestrator | 2026-04-09 01:07:05 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:05.322653 | orchestrator | 2026-04-09 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:08.361611 | orchestrator | 2026-04-09 01:07:08 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:08.363738 | orchestrator | 2026-04-09 01:07:08 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:08.364736 | orchestrator | 2026-04-09 01:07:08 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:08.365856 | orchestrator | 2026-04-09 01:07:08 | INFO  | Task 
3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:08.366266 | orchestrator | 2026-04-09 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:11.407066 | orchestrator | 2026-04-09 01:07:11 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:11.408676 | orchestrator | 2026-04-09 01:07:11 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:11.410108 | orchestrator | 2026-04-09 01:07:11 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:11.411597 | orchestrator | 2026-04-09 01:07:11 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:11.411632 | orchestrator | 2026-04-09 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:14.449006 | orchestrator | 2026-04-09 01:07:14 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:14.450870 | orchestrator | 2026-04-09 01:07:14 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:14.452608 | orchestrator | 2026-04-09 01:07:14 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:14.454079 | orchestrator | 2026-04-09 01:07:14 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:14.454119 | orchestrator | 2026-04-09 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:17.496051 | orchestrator | 2026-04-09 01:07:17 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:17.496708 | orchestrator | 2026-04-09 01:07:17 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:17.497620 | orchestrator | 2026-04-09 01:07:17 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:17.498699 | orchestrator | 2026-04-09 01:07:17 | INFO  | Task 
3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:17.498730 | orchestrator | 2026-04-09 01:07:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:20.532283 | orchestrator | 2026-04-09 01:07:20 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:20.532953 | orchestrator | 2026-04-09 01:07:20 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:20.533436 | orchestrator | 2026-04-09 01:07:20 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:20.534246 | orchestrator | 2026-04-09 01:07:20 | INFO  | Task 3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state STARTED 2026-04-09 01:07:20.534360 | orchestrator | 2026-04-09 01:07:20 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for the same four tasks repeated every ~3 s from 01:07:23 to 01:07:51; all four remained in state STARTED] 2026-04-09 01:07:54.052781 | orchestrator | 2026-04-09 01:07:54 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:54.052862 | orchestrator | 2026-04-09 01:07:54 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:54.053434 | orchestrator | 2026-04-09 01:07:54 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:54.055085 | orchestrator | 2026-04-09 01:07:54 | INFO  | Task 
3bcdd41c-1790-45b5-b03c-1402e1891c9d is in state SUCCESS 2026-04-09 01:07:54.056372 | orchestrator | 2026-04-09 01:07:54.056443 | orchestrator | 2026-04-09 01:07:54.056454 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:07:54.056462 | orchestrator | 2026-04-09 01:07:54.056469 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:07:54.056499 | orchestrator | Thursday 09 April 2026 01:07:02 +0000 (0:00:00.194) 0:00:00.194 ******** 2026-04-09 01:07:54.056505 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:54.056512 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:54.056518 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:54.056524 | orchestrator | 2026-04-09 01:07:54.056530 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:07:54.056537 | orchestrator | Thursday 09 April 2026 01:07:02 +0000 (0:00:00.436) 0:00:00.631 ******** 2026-04-09 01:07:54.056543 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-09 01:07:54.056550 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-09 01:07:54.056555 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-09 01:07:54.056561 | orchestrator | 2026-04-09 01:07:54.056568 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-09 01:07:54.056574 | orchestrator | 2026-04-09 01:07:54.056580 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-09 01:07:54.056586 | orchestrator | Thursday 09 April 2026 01:07:03 +0000 (0:00:00.523) 0:00:01.154 ******** 2026-04-09 01:07:54.056593 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:54.056598 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:54.056604 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:54.056610 | 
orchestrator | 2026-04-09 01:07:54.056617 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:07:54.056624 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:07:54.056633 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:07:54.056639 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:07:54.056646 | orchestrator | 2026-04-09 01:07:54.056653 | orchestrator | 2026-04-09 01:07:54.056660 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:07:54.056668 | orchestrator | Thursday 09 April 2026 01:07:04 +0000 (0:00:00.999) 0:00:02.154 ******** 2026-04-09 01:07:54.056675 | orchestrator | =============================================================================== 2026-04-09 01:07:54.056682 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.00s 2026-04-09 01:07:54.056688 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-04-09 01:07:54.056694 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2026-04-09 01:07:54.056701 | orchestrator | 2026-04-09 01:07:54.056708 | orchestrator | 2026-04-09 01:07:54.056715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:07:54.056721 | orchestrator | 2026-04-09 01:07:54.056728 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:07:54.056748 | orchestrator | Thursday 09 April 2026 01:06:11 +0000 (0:00:00.934) 0:00:00.934 ******** 2026-04-09 01:07:54.056754 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:54.056761 | orchestrator | ok: [testbed-node-1] 2026-04-09 
01:07:54.056767 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:54.056775 | orchestrator | 2026-04-09 01:07:54.056782 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:07:54.056789 | orchestrator | Thursday 09 April 2026 01:06:11 +0000 (0:00:00.330) 0:00:01.264 ******** 2026-04-09 01:07:54.056796 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-09 01:07:54.056803 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-09 01:07:54.056810 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-09 01:07:54.056818 | orchestrator | 2026-04-09 01:07:54.056824 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-09 01:07:54.056830 | orchestrator | 2026-04-09 01:07:54.056837 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 01:07:54.056851 | orchestrator | Thursday 09 April 2026 01:06:11 +0000 (0:00:00.220) 0:00:01.485 ******** 2026-04-09 01:07:54.056858 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:54.056865 | orchestrator | 2026-04-09 01:07:54.056871 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-09 01:07:54.056877 | orchestrator | Thursday 09 April 2026 01:06:12 +0000 (0:00:00.877) 0:00:02.362 ******** 2026-04-09 01:07:54.056884 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-09 01:07:54.056890 | orchestrator | 2026-04-09 01:07:54.056896 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-09 01:07:54.056903 | orchestrator | Thursday 09 April 2026 01:06:16 +0000 (0:00:03.554) 0:00:05.916 ******** 2026-04-09 01:07:54.056909 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-09 01:07:54.056916 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-09 01:07:54.056923 | orchestrator | 2026-04-09 01:07:54.056930 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-09 01:07:54.056936 | orchestrator | Thursday 09 April 2026 01:06:21 +0000 (0:00:05.947) 0:00:11.864 ******** 2026-04-09 01:07:54.056943 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-09 01:07:54.056950 | orchestrator | 2026-04-09 01:07:54.056979 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-09 01:07:54.056987 | orchestrator | Thursday 09 April 2026 01:06:25 +0000 (0:00:03.464) 0:00:15.329 ******** 2026-04-09 01:07:54.057011 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-09 01:07:54.057017 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-09 01:07:54.057022 | orchestrator | 2026-04-09 01:07:54.057027 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-09 01:07:54.057031 | orchestrator | Thursday 09 April 2026 01:06:29 +0000 (0:00:04.335) 0:00:19.664 ******** 2026-04-09 01:07:54.057036 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-09 01:07:54.057041 | orchestrator | 2026-04-09 01:07:54.057046 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-09 01:07:54.057050 | orchestrator | Thursday 09 April 2026 01:06:33 +0000 (0:00:03.403) 0:00:23.068 ******** 2026-04-09 01:07:54.057055 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-09 01:07:54.057060 | orchestrator | 2026-04-09 01:07:54.057065 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-09 
01:07:54.057069 | orchestrator | Thursday 09 April 2026 01:06:36 +0000 (0:00:03.576) 0:00:26.645 ******** 2026-04-09 01:07:54.057074 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.057078 | orchestrator | 2026-04-09 01:07:54.057083 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-09 01:07:54.057088 | orchestrator | Thursday 09 April 2026 01:06:39 +0000 (0:00:02.749) 0:00:29.394 ******** 2026-04-09 01:07:54.057092 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.057097 | orchestrator | 2026-04-09 01:07:54.057102 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-09 01:07:54.057106 | orchestrator | Thursday 09 April 2026 01:06:42 +0000 (0:00:03.189) 0:00:32.583 ******** 2026-04-09 01:07:54.057111 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.057116 | orchestrator | 2026-04-09 01:07:54.057121 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-09 01:07:54.057126 | orchestrator | Thursday 09 April 2026 01:06:45 +0000 (0:00:03.213) 0:00:35.797 ******** 2026-04-09 01:07:54.057134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057189 | orchestrator | 2026-04-09 01:07:54.057194 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-09 01:07:54.057202 | orchestrator | Thursday 09 April 2026 01:06:47 +0000 (0:00:01.515) 0:00:37.313 ******** 2026-04-09 01:07:54.057209 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:54.057215 | orchestrator | 2026-04-09 01:07:54.057221 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-09 01:07:54.057227 | orchestrator | Thursday 09 April 2026 01:06:47 +0000 (0:00:00.110) 0:00:37.423 ******** 2026-04-09 01:07:54.057233 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:54.057239 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:54.057246 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:54.057251 | orchestrator | 2026-04-09 01:07:54.057307 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-09 01:07:54.057316 | orchestrator | Thursday 09 April 2026 01:06:47 +0000 (0:00:00.230) 0:00:37.653 ******** 2026-04-09 01:07:54.057323 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:07:54.057330 | orchestrator | 2026-04-09 01:07:54.057337 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-09 01:07:54.057343 | orchestrator | Thursday 09 April 2026 01:06:48 +0000 (0:00:00.811) 0:00:38.465 ******** 2026-04-09 01:07:54.057350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057479 | orchestrator | 2026-04-09 01:07:54.057489 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-09 01:07:54.057495 | orchestrator | Thursday 09 April 2026 01:06:50 +0000 (0:00:02.013) 0:00:40.479 ******** 2026-04-09 01:07:54.057502 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:07:54.057508 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:07:54.057514 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:07:54.057521 | orchestrator | 2026-04-09 01:07:54.057528 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 01:07:54.057539 | orchestrator | Thursday 09 April 2026 01:06:51 +0000 (0:00:00.436) 0:00:40.915 ******** 2026-04-09 01:07:54.057546 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:07:54.057553 | orchestrator | 2026-04-09 01:07:54.057559 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] 
********* 2026-04-09 01:07:54.057566 | orchestrator | Thursday 09 April 2026 01:06:51 +0000 (0:00:00.493) 0:00:41.408 ******** 2026-04-09 01:07:54.057578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2026-04-09 01:07:54.057595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.057602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.057633 | orchestrator | 2026-04-09 01:07:54.057639 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-09 01:07:54.057645 | orchestrator | Thursday 09 April 2026 01:06:53 +0000 (0:00:02.040) 0:00:43.448 ******** 2026-04-09 01:07:54.057655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.057663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.057670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.057682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.057695 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:54.057702 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:54.057708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2026-04-09 01:07:54.057715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.057731 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:54.057737 | orchestrator | 2026-04-09 01:07:54.057744 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-09 01:07:54.057750 | orchestrator | Thursday 09 April 2026 01:06:54 +0000 (0:00:01.053) 0:00:44.502 ******** 2026-04-09 01:07:54.057756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.057762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.057775 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:54.057788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.057794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.057800 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:54.057810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.057817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.057823 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:54.057830 | orchestrator | 2026-04-09 01:07:54.057836 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-09 01:07:54.057842 | orchestrator | Thursday 09 April 2026 01:06:55 +0000 (0:00:01.048) 0:00:45.550 ******** 2026-04-09 01:07:54.058212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058307 | orchestrator | 2026-04-09 01:07:54.058313 | 
orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-09 01:07:54.058320 | orchestrator | Thursday 09 April 2026 01:06:58 +0000 (0:00:02.690) 0:00:48.241 ******** 2026-04-09 01:07:54.058326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058379 | orchestrator | 2026-04-09 01:07:54.058386 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-09 01:07:54.058392 | orchestrator | Thursday 09 April 2026 01:07:04 +0000 (0:00:06.421) 0:00:54.663 ******** 2026-04-09 01:07:54.058402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.058409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.058420 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:54.058427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.058439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.058446 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:54.058452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-09 01:07:54.058462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:07:54.058468 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:54.058474 | orchestrator | 2026-04-09 01:07:54.058480 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-09 01:07:54.058486 | orchestrator | Thursday 09 April 2026 01:07:05 +0000 (0:00:00.577) 0:00:55.241 ******** 2026-04-09 01:07:54.058517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-09 01:07:54.058550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:07:54.058578 | orchestrator | 2026-04-09 01:07:54.058584 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-09 01:07:54.058590 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:01.960) 0:00:57.201 ******** 2026-04-09 01:07:54.058596 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:07:54.058602 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:07:54.058608 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:07:54.058614 | orchestrator | 2026-04-09 01:07:54.058619 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-09 01:07:54.058625 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:00.378) 0:00:57.579 ******** 2026-04-09 01:07:54.058632 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.058638 | orchestrator | 2026-04-09 01:07:54.058644 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-09 01:07:54.058650 | orchestrator | Thursday 09 April 2026 01:07:09 +0000 (0:00:01.907) 0:00:59.486 ******** 2026-04-09 01:07:54.058655 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.058661 | orchestrator | 2026-04-09 01:07:54.058668 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-09 01:07:54.058674 | orchestrator | Thursday 09 April 2026 01:07:11 +0000 (0:00:02.294) 0:01:01.781 ******** 2026-04-09 01:07:54.058686 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.058692 | orchestrator | 2026-04-09 
01:07:54.058698 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 01:07:54.058704 | orchestrator | Thursday 09 April 2026 01:07:26 +0000 (0:00:14.475) 0:01:16.257 ******** 2026-04-09 01:07:54.058710 | orchestrator | 2026-04-09 01:07:54.058717 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 01:07:54.058723 | orchestrator | Thursday 09 April 2026 01:07:26 +0000 (0:00:00.169) 0:01:16.427 ******** 2026-04-09 01:07:54.058730 | orchestrator | 2026-04-09 01:07:54.058736 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-09 01:07:54.058742 | orchestrator | Thursday 09 April 2026 01:07:26 +0000 (0:00:00.061) 0:01:16.488 ******** 2026-04-09 01:07:54.058748 | orchestrator | 2026-04-09 01:07:54.058755 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-09 01:07:54.058761 | orchestrator | Thursday 09 April 2026 01:07:26 +0000 (0:00:00.062) 0:01:16.550 ******** 2026-04-09 01:07:54.058766 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.058773 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:07:54.058779 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:07:54.058785 | orchestrator | 2026-04-09 01:07:54.058791 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-09 01:07:54.058797 | orchestrator | Thursday 09 April 2026 01:07:42 +0000 (0:00:15.774) 0:01:32.325 ******** 2026-04-09 01:07:54.058804 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:07:54.058810 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:07:54.058816 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:07:54.058822 | orchestrator | 2026-04-09 01:07:54.058829 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:07:54.058836 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-09 01:07:54.058853 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 01:07:54.058861 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-09 01:07:54.058866 | orchestrator | 2026-04-09 01:07:54.058872 | orchestrator | 2026-04-09 01:07:54.058878 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:07:54.058884 | orchestrator | Thursday 09 April 2026 01:07:51 +0000 (0:00:08.757) 0:01:41.082 ******** 2026-04-09 01:07:54.058890 | orchestrator | =============================================================================== 2026-04-09 01:07:54.058899 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.77s 2026-04-09 01:07:54.058908 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.48s 2026-04-09 01:07:54.058915 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 8.76s 2026-04-09 01:07:54.058921 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.42s 2026-04-09 01:07:54.058937 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.95s 2026-04-09 01:07:54.058944 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.34s 2026-04-09 01:07:54.058950 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.58s 2026-04-09 01:07:54.059049 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.55s 2026-04-09 01:07:54.059061 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.46s 2026-04-09 01:07:54.059067 | orchestrator | 
service-ks-register : magnum | Creating roles --------------------------- 3.40s 2026-04-09 01:07:54.059074 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.21s 2026-04-09 01:07:54.059080 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.19s 2026-04-09 01:07:54.059086 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.75s 2026-04-09 01:07:54.059093 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.69s 2026-04-09 01:07:54.059099 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s 2026-04-09 01:07:54.059106 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.04s 2026-04-09 01:07:54.059113 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.01s 2026-04-09 01:07:54.059119 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.96s 2026-04-09 01:07:54.059126 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.91s 2026-04-09 01:07:54.059133 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.52s 2026-04-09 01:07:54.059140 | orchestrator | 2026-04-09 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:07:57.101307 | orchestrator | 2026-04-09 01:07:57 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state STARTED 2026-04-09 01:07:57.102203 | orchestrator | 2026-04-09 01:07:57 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED 2026-04-09 01:07:57.103912 | orchestrator | 2026-04-09 01:07:57 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:07:57.104005 | orchestrator | 2026-04-09 01:07:57 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:08:51.925706 | orchestrator | 2026-04-09 01:08:51 | INFO  | Task cdd434f8-ab84-4482-90c3-86cce326ed02 is in state SUCCESS 2026-04-09 01:08:51.926607 | orchestrator | 2026-04-09 01:08:51.926659 | orchestrator | 2026-04-09 01:08:51.926666 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:08:51.926671 | orchestrator | 2026-04-09 01:08:51.926675 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:08:51.926680 | orchestrator | Thursday 09 April 2026 01:06:59 +0000 (0:00:00.612) 0:00:00.612 ******** 2026-04-09 01:08:51.926684 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:08:51.926689 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:08:51.926693 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:08:51.926697 | orchestrator | 2026-04-09 01:08:51.926701 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:08:51.926705 | orchestrator | Thursday 09 April 2026 01:06:59 +0000 (0:00:00.516) 0:00:01.129 ******** 2026-04-09 01:08:51.926709 | orchestrator | ok: 
[testbed-node-0] => (item=enable_grafana_True) 2026-04-09 01:08:51.926714 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-09 01:08:51.926718 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-09 01:08:51.926722 | orchestrator | 2026-04-09 01:08:51.926726 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-09 01:08:51.926730 | orchestrator | 2026-04-09 01:08:51.926734 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 01:08:51.926738 | orchestrator | Thursday 09 April 2026 01:07:00 +0000 (0:00:00.433) 0:00:01.562 ******** 2026-04-09 01:08:51.926742 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:08:51.926747 | orchestrator | 2026-04-09 01:08:51.926751 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-09 01:08:51.926755 | orchestrator | Thursday 09 April 2026 01:07:01 +0000 (0:00:01.080) 0:00:02.642 ******** 2026-04-09 01:08:51.926762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.926769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.926803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.926808 | orchestrator | 2026-04-09 01:08:51.926812 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-09 01:08:51.926816 | orchestrator | Thursday 09 April 2026 01:07:02 +0000 (0:00:01.138) 0:00:03.781 ******** 2026-04-09 01:08:51.926820 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-09 01:08:51.926824 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-09 01:08:51.926828 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:08:51.926832 | orchestrator | 2026-04-09 01:08:51.926836 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-09 
01:08:51.926840 | orchestrator | Thursday 09 April 2026 01:07:03 +0000 (0:00:00.967) 0:00:04.749 ******** 2026-04-09 01:08:51.926844 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:08:51.926848 | orchestrator | 2026-04-09 01:08:51.926852 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-09 01:08:51.926856 | orchestrator | Thursday 09 April 2026 01:07:03 +0000 (0:00:00.554) 0:00:05.303 ******** 2026-04-09 01:08:51.926930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.926936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.926940 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.926949 | orchestrator | 2026-04-09 01:08:51.926953 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-09 01:08:51.926957 | orchestrator | Thursday 09 April 2026 01:07:05 +0000 (0:00:01.413) 0:00:06.717 ******** 2026-04-09 01:08:51.926965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 01:08:51.926969 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:08:51.926974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 01:08:51.926978 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:08:51.926986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 01:08:51.926990 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:08:51.926994 | orchestrator | 2026-04-09 01:08:51.926998 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-09 01:08:51.927080 | orchestrator | Thursday 09 April 2026 01:07:05 +0000 (0:00:00.314) 0:00:07.031 ******** 2026-04-09 01:08:51.927125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 01:08:51.927134 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:08:51.927141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 01:08:51.927154 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:08:51.927161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-09 01:08:51.927167 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:08:51.927173 | orchestrator | 2026-04-09 01:08:51.927180 | 
orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-09 01:08:51.927187 | orchestrator | Thursday 09 April 2026 01:07:06 +0000 (0:00:00.610) 0:00:07.642 ******** 2026-04-09 01:08:51.927200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.927207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.927222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.927231 | orchestrator | 2026-04-09 01:08:51.927239 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-09 01:08:51.927270 | orchestrator | Thursday 09 April 2026 01:07:07 +0000 (0:00:01.353) 0:00:08.995 ******** 2026-04-09 01:08:51.927277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.927290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}})
2026-04-09 01:08:51.927301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-09 01:08:51.927308 | orchestrator | 
2026-04-09 01:08:51.927315 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-09 01:08:51.927320 | orchestrator | Thursday 09 April 2026 01:07:08 +0000 (0:00:01.377) 0:00:10.373 ********
2026-04-09 01:08:51.927327 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:08:51.927333 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:08:51.927339 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:08:51.927345 | orchestrator | 
2026-04-09 01:08:51.927352 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-09 01:08:51.927359 | orchestrator | Thursday 09 April 2026 01:07:09 +0000 (0:00:00.316) 0:00:10.690 ********
2026-04-09 01:08:51.927365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-09 01:08:51.927372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-09 01:08:51.927379 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-09 01:08:51.927385 | orchestrator | 
2026-04-09 01:08:51.927392 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-09 01:08:51.927397 | orchestrator | Thursday 09 April 2026 01:07:10 +0000 (0:00:01.100) 0:00:11.791 ********
2026-04-09 01:08:51.927404 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-09 01:08:51.927413 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-09 01:08:51.927419 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-09 01:08:51.927425 | orchestrator | 
2026-04-09 01:08:51.927432 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-09 01:08:51.927438 | orchestrator | Thursday 09 April 2026 01:07:11 +0000 (0:00:01.114) 0:00:12.906 ********
2026-04-09 01:08:51.927451 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 01:08:51.927474 | orchestrator | 
2026-04-09 01:08:51.927481 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-09 01:08:51.927487 | orchestrator | Thursday 09 April 2026 01:07:12 +0000 (0:00:00.795) 0:00:13.701 ********
2026-04-09 01:08:51.927493 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-09 01:08:51.927499 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-09 01:08:51.927505 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:08:51.927512 | orchestrator | ok: [testbed-node-1]
2026-04-09 01:08:51.927518 | orchestrator | ok: [testbed-node-2]
2026-04-09 01:08:51.927524 | orchestrator | 
2026-04-09 01:08:51.927531 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-09 01:08:51.927537 | orchestrator | Thursday 09 April 2026 01:07:12 +0000 (0:00:00.625) 0:00:14.326 ********
2026-04-09 01:08:51.927543 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:08:51.927550 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:08:51.927556 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:08:51.927562 | orchestrator | 
2026-04-09 01:08:51.927569 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-09 01:08:51.927576 | orchestrator | Thursday 09 April 2026 01:07:13 +0000 (0:00:00.269) 0:00:14.596 ********
2026-04-09 01:08:51.927584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1326873, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.777792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1326873, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.777792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1326873, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.777792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1326899, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7870955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1326899, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7870955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1326899, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7870955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1327159, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8442519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1327159, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8442519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1327159, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8442519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1326891, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.784509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1326891, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.784509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1326891, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.784509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1327164, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8478458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1327164, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8478458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1327164, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8478458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1326881, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7796824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1326881, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7796824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1326881, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7796824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1326911, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7931783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1326911, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7931783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1326911, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7931783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1327148, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8426325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.927780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1327148, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8426325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1327148, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8426325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1326869, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7769177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1326869, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7769177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1326869, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7769177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1326877, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.778863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1326877, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.778863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1326877, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.778863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1326896, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.78687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1326896, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.78687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1326896, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.78687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1327130, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8401213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1327130, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8401213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1327130, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8401213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1327157, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8432796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1327157, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8432796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1327157, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8432796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1326885, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7841444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1326885, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7841444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1326885, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7841444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1327144, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8416598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1327144, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8416598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1327144, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8416598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1327176, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8507562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1327176, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8507562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1327176, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8507562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1326914, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.793781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1326914, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.793781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1326914, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.793781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1326907, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.79176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1326907, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.79176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1326907, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.79176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1326904, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.791418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-09 01:08:51.928465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1326904, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.791418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1326904, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.791418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1327137, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.840553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1327137, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.840553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1327137, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.840553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1326901, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7910106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1326901, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7910106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1326901, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7910106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1327153, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8426325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928591 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1327153, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8426325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1327153, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8426325, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1326883, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7806823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928624 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1326883, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7806823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1326883, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.7806823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327656, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.985128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 
01:08:51.928655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327656, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.985128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327656, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.985128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327575, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9587436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-09 01:08:51.928687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327575, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9587436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.928697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327575, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9587436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327196, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.854649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327196, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.854649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327196, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.854649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1327597, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9663243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1327597, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9663243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1327597, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9663243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327183, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 
'ctime': 1775693922.8516786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327183, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8516786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327183, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8516786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 65458, 'inode': 1327632, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9748986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327632, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9748986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327632, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9748986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327600, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9711757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327600, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9711757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327600, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9711757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1327635, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9748986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1327635, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9748986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1327635, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9748986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 
01:08:51.929195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327653, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9826853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327653, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9826853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327653, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9826853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1327630, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9734824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1327630, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9734824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1327630, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9734824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327588, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.960685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327588, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.960685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327588, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.960685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327574, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327574, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327574, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9536848, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327583, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.960655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327583, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.960655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327583, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 
1775693922.960655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327570, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327570, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 
1327570, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9536848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1327589, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.961685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1327589, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.961685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327651, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9806852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1327589, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.961685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327651, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9806852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327651, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9806852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327639, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9794357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327639, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9794357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 
01:08:51.929560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327186, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8516834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327639, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9794357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327186, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8516834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327190, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8535953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327186, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8516834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327190, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8535953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327624, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.972355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327190, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.8535953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327624, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 
1775692946.0, 'ctime': 1775693922.972355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1327636, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9756403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327624, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.972355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 21951, 'inode': 1327636, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9756403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1327636, 'dev': 83, 'nlink': 1, 'atime': 1775692946.0, 'mtime': 1775692946.0, 'ctime': 1775693922.9756403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-09 01:08:51.929709 | orchestrator | 2026-04-09 01:08:51.929720 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-09 01:08:51.929728 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:37.296) 0:00:51.892 ******** 2026-04-09 01:08:51.929735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2026-04-09 01:08:51.929742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.929753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-09 01:08:51.929770 | orchestrator | 2026-04-09 01:08:51.929777 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-09 01:08:51.929790 | orchestrator | Thursday 09 April 2026 01:07:51 +0000 (0:00:01.137) 0:00:53.029 ******** 2026-04-09 01:08:51.929797 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:08:51.929803 | orchestrator | 2026-04-09 01:08:51.929809 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-09 01:08:51.929815 | orchestrator | Thursday 09 
April 2026 01:07:54 +0000 (0:00:02.581) 0:00:55.611 ******** 2026-04-09 01:08:51.929821 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:08:51.929826 | orchestrator | 2026-04-09 01:08:51.929832 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 01:08:51.929838 | orchestrator | Thursday 09 April 2026 01:07:56 +0000 (0:00:02.566) 0:00:58.178 ******** 2026-04-09 01:08:51.929844 | orchestrator | 2026-04-09 01:08:51.929858 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 01:08:51.929983 | orchestrator | Thursday 09 April 2026 01:07:56 +0000 (0:00:00.069) 0:00:58.247 ******** 2026-04-09 01:08:51.929990 | orchestrator | 2026-04-09 01:08:51.929996 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-09 01:08:51.930002 | orchestrator | Thursday 09 April 2026 01:07:56 +0000 (0:00:00.063) 0:00:58.311 ******** 2026-04-09 01:08:51.930008 | orchestrator | 2026-04-09 01:08:51.930069 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-09 01:08:51.930076 | orchestrator | Thursday 09 April 2026 01:07:56 +0000 (0:00:00.070) 0:00:58.381 ******** 2026-04-09 01:08:51.930082 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:08:51.930088 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:08:51.930094 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:08:51.930100 | orchestrator | 2026-04-09 01:08:51.930106 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-09 01:08:51.930123 | orchestrator | Thursday 09 April 2026 01:08:04 +0000 (0:00:07.254) 0:01:05.635 ******** 2026-04-09 01:08:51.930129 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:08:51.930135 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:08:51.930141 | orchestrator | FAILED - RETRYING: [testbed-node-0]: 
Waiting for grafana to start on first node (12 retries left). 2026-04-09 01:08:51.930149 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:08:51.930155 | orchestrator | 2026-04-09 01:08:51.930160 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-09 01:08:51.930166 | orchestrator | Thursday 09 April 2026 01:08:18 +0000 (0:00:13.972) 0:01:19.608 ******** 2026-04-09 01:08:51.930172 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:08:51.930177 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:08:51.930183 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:08:51.930189 | orchestrator | 2026-04-09 01:08:51.930195 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-09 01:08:51.930201 | orchestrator | Thursday 09 April 2026 01:08:43 +0000 (0:00:25.123) 0:01:44.731 ******** 2026-04-09 01:08:51.930207 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:08:51.930213 | orchestrator | 2026-04-09 01:08:51.930219 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-09 01:08:51.930225 | orchestrator | Thursday 09 April 2026 01:08:45 +0000 (0:00:02.669) 0:01:47.400 ******** 2026-04-09 01:08:51.930231 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:08:51.930237 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:08:51.930243 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:08:51.930248 | orchestrator | 2026-04-09 01:08:51.930255 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-09 01:08:51.930262 | orchestrator | Thursday 09 April 2026 01:08:46 +0000 (0:00:00.280) 0:01:47.681 ******** 2026-04-09 01:08:51.930271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 
'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-04-09 01:08:51.930280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-04-09 01:08:51.930287 | orchestrator |
2026-04-09 01:08:51.930293 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-04-09 01:08:51.930300 | orchestrator | Thursday 09 April 2026 01:08:48 +0000 (0:00:02.416) 0:01:50.098 ********
2026-04-09 01:08:51.930307 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:08:51.930313 | orchestrator |
2026-04-09 01:08:51.930319 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:08:51.930334 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:08:51.930342 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:08:51.930349 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:08:51.930356 | orchestrator |
2026-04-09 01:08:51.930362 | orchestrator |
2026-04-09 01:08:51.930373 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:08:51.930379 | orchestrator | Thursday 09 April 2026 01:08:48 +0000 (0:00:00.275) 0:01:50.373 ********
2026-04-09 01:08:51.930385 | orchestrator | ===============================================================================
2026-04-09 01:08:51.930391 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.30s
2026-04-09 01:08:51.930398 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.12s
2026-04-09 01:08:51.930404 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 13.97s
2026-04-09 01:08:51.930411 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.25s
2026-04-09 01:08:51.930418 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.67s
2026-04-09 01:08:51.930425 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.58s
2026-04-09 01:08:51.930432 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.57s
2026-04-09 01:08:51.930439 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.42s
2026-04-09 01:08:51.930446 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.41s
2026-04-09 01:08:51.930453 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.38s
2026-04-09 01:08:51.930460 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.35s
2026-04-09 01:08:51.930466 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.14s
2026-04-09 01:08:51.930472 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.14s
2026-04-09 01:08:51.930478 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.12s
2026-04-09 01:08:51.930484 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.10s
2026-04-09 01:08:51.930497 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.08s
2026-04-09 01:08:51.930504 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.97s
2026-04-09 01:08:51.930509 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.80s
2026-04-09 01:08:51.930515 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.63s
2026-04-09 01:08:51.930521 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.61s
2026-04-09 01:08:51.930527 | orchestrator | 2026-04-09 01:08:51 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:08:51.930534 | orchestrator | 2026-04-09 01:08:51 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:08:51.930541 | orchestrator | 2026-04-09 01:08:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:08:54.975576 | orchestrator | 2026-04-09 01:08:54 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:08:54.977364 | orchestrator | 2026-04-09 01:08:54 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:08:54.977652 | orchestrator | 2026-04-09 01:08:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:08:58.019693 | orchestrator | 2026-04-09 01:08:58 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:08:58.022108 | orchestrator | 2026-04-09 01:08:58 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:08:58.022156 | orchestrator | 2026-04-09 01:08:58 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:09:01.063415 | orchestrator | 2026-04-09 01:09:01 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:09:01.065832 | orchestrator | 2026-04-09 01:09:01 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:09:01.065911 | orchestrator | 2026-04-09 01:09:01 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:09:04.107071 | orchestrator | 2026-04-09 01:09:04 | INFO  | Task
98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:09:04.107545 | orchestrator | 2026-04-09 01:09:04 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:09:04.107572 | orchestrator | 2026-04-09 01:09:04 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:09:07.162581 | orchestrator | 2026-04-09 01:09:07 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:09:07.163909 | orchestrator | 2026-04-09 01:09:07 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:09:07.163962 | orchestrator | 2026-04-09 01:09:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:09:10.203077 | orchestrator | 2026-04-09 01:09:10 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state STARTED
2026-04-09 01:09:10.205086 | orchestrator | 2026-04-09 01:09:10 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED
2026-04-09 01:09:10.205146 | orchestrator | 2026-04-09 01:09:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:09:13.242733 | orchestrator |
2026-04-09 01:09:13.242785 | orchestrator | 2026-04-09 01:09:13 | INFO  | Task 98ff95d7-5531-49a2-8d57-3ab7233848cf is in state SUCCESS
2026-04-09 01:09:13.243854 | orchestrator |
2026-04-09 01:09:13.243897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 01:09:13.243906 | orchestrator |
2026-04-09 01:09:13.243913 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-09 01:09:13.243920 | orchestrator | Thursday 09 April 2026 01:00:34 +0000 (0:00:00.410) 0:00:00.410 ********
2026-04-09 01:09:13.243926 | orchestrator | changed: [testbed-manager]
2026-04-09 01:09:13.243931 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.243936 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:09:13.243943 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:09:13.243953 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:09:13.243959 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:09:13.243965 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:09:13.243971 | orchestrator |
2026-04-09 01:09:13.243977 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 01:09:13.243984 | orchestrator | Thursday 09 April 2026 01:00:35 +0000 (0:00:00.819) 0:00:01.230 ********
2026-04-09 01:09:13.243989 | orchestrator | changed: [testbed-manager]
2026-04-09 01:09:13.243995 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244001 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:09:13.244007 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:09:13.244013 | orchestrator | changed: [testbed-node-3]
2026-04-09 01:09:13.244019 | orchestrator | changed: [testbed-node-4]
2026-04-09 01:09:13.244024 | orchestrator | changed: [testbed-node-5]
2026-04-09 01:09:13.244030 | orchestrator |
2026-04-09 01:09:13.244035 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 01:09:13.244042 | orchestrator | Thursday 09 April 2026 01:00:35 +0000 (0:00:00.655) 0:00:01.885 ********
2026-04-09 01:09:13.244049 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-09 01:09:13.244084 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-09 01:09:13.244091 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-09 01:09:13.244097 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-09 01:09:13.244141 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-09 01:09:13.244150 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-09 01:09:13.244192 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-09 01:09:13.244200 | orchestrator |
2026-04-09 01:09:13.244207 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-09 01:09:13.244388 | orchestrator |
2026-04-09 01:09:13.244399 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-09 01:09:13.244406 | orchestrator | Thursday 09 April 2026 01:00:36 +0000 (0:00:00.967) 0:00:02.853 ********
2026-04-09 01:09:13.244412 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:09:13.244419 | orchestrator |
2026-04-09 01:09:13.244425 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-09 01:09:13.244432 | orchestrator | Thursday 09 April 2026 01:00:38 +0000 (0:00:01.053) 0:00:03.907 ********
2026-04-09 01:09:13.244439 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-09 01:09:13.244445 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-09 01:09:13.244452 | orchestrator |
2026-04-09 01:09:13.244457 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-09 01:09:13.244461 | orchestrator | Thursday 09 April 2026 01:00:43 +0000 (0:00:05.849) 0:00:09.756 ********
2026-04-09 01:09:13.244465 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 01:09:13.244469 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 01:09:13.244472 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244476 | orchestrator |
2026-04-09 01:09:13.244480 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-09 01:09:13.244484 | orchestrator | Thursday 09 April 2026 01:00:48 +0000 (0:00:04.755) 0:00:14.512 ********
2026-04-09 01:09:13.244487 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244491 | orchestrator |
2026-04-09 01:09:13.244495 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-09 01:09:13.244498 | orchestrator | Thursday 09 April 2026 01:00:49 +0000 (0:00:00.616) 0:00:15.128 ********
2026-04-09 01:09:13.244502 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244506 | orchestrator |
2026-04-09 01:09:13.244510 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-09 01:09:13.244513 | orchestrator | Thursday 09 April 2026 01:00:50 +0000 (0:00:01.208) 0:00:16.337 ********
2026-04-09 01:09:13.244517 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244521 | orchestrator |
2026-04-09 01:09:13.244524 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 01:09:13.244528 | orchestrator | Thursday 09 April 2026 01:00:53 +0000 (0:00:03.062) 0:00:19.400 ********
2026-04-09 01:09:13.244532 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.244536 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.244539 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.244543 | orchestrator |
2026-04-09 01:09:13.244547 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-09 01:09:13.244550 | orchestrator | Thursday 09 April 2026 01:00:53 +0000 (0:00:00.367) 0:00:19.767 ********
2026-04-09 01:09:13.244554 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:09:13.244558 | orchestrator |
2026-04-09 01:09:13.244562 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-09 01:09:13.244565 | orchestrator | Thursday 09 April 2026 01:01:28 +0000 (0:00:34.640) 0:00:54.407 ********
2026-04-09 01:09:13.244569 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244573 | orchestrator |
2026-04-09 01:09:13.244576 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-09 01:09:13.244587 | orchestrator | Thursday 09 April 2026 01:01:46 +0000 (0:00:17.771) 0:01:12.179 ********
2026-04-09 01:09:13.244597 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:09:13.244601 | orchestrator |
2026-04-09 01:09:13.244604 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-09 01:09:13.244608 | orchestrator | Thursday 09 April 2026 01:01:59 +0000 (0:00:13.405) 0:01:25.585 ********
2026-04-09 01:09:13.244619 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:09:13.244623 | orchestrator |
2026-04-09 01:09:13.244627 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-09 01:09:13.244630 | orchestrator | Thursday 09 April 2026 01:02:00 +0000 (0:00:00.635) 0:01:26.220 ********
2026-04-09 01:09:13.244634 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.244638 | orchestrator |
2026-04-09 01:09:13.244642 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 01:09:13.244645 | orchestrator | Thursday 09 April 2026 01:02:00 +0000 (0:00:00.448) 0:01:26.668 ********
2026-04-09 01:09:13.244649 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:09:13.244653 | orchestrator |
2026-04-09 01:09:13.244657 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-09 01:09:13.244661 | orchestrator | Thursday 09 April 2026 01:02:01 +0000 (0:00:00.608) 0:01:27.276 ********
2026-04-09 01:09:13.244665 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:09:13.244668 | orchestrator |
2026-04-09 01:09:13.244672 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-09 01:09:13.244676 | orchestrator | Thursday 09 April 2026 01:02:21 +0000 (0:00:19.922) 0:01:47.199 ********
2026-04-09 01:09:13.244680 | orchestrator | skipping: [testbed-node-0]
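The bootstrap tasks above (Create cell0 mappings, Get a list of existing cells, Create cell) follow the standard nova cell_v2 bootstrap order driven through `nova-manage`. A minimal sketch of that sequence, assuming a default cell name; the wrapper function and the name `cell1` are illustrative assumptions, not taken from the kolla-ansible or OSISM source:

```python
# Illustrative sketch of the nova-manage cell_v2 bootstrap order implied by
# the tasks above. The helper and the cell name "cell1" are assumptions.

def cell_bootstrap_commands(cell_name="cell1"):
    """Return the nova-manage invocations in bootstrap order."""
    return [
        ["nova-manage", "api_db", "sync"],                      # nova_api schema
        ["nova-manage", "cell_v2", "map_cell0"],                # "Create cell0 mappings"
        ["nova-manage", "cell_v2", "list_cells", "--verbose"],  # "Get a list of existing cells"
        ["nova-manage", "cell_v2", "create_cell", "--name", cell_name],  # "Create cell"
        ["nova-manage", "db", "sync"],                          # cell database schema
    ]

for cmd in cell_bootstrap_commands():
    print(" ".join(cmd))
```

On a real deployment these run inside the nova bootstrap containers against the generated nova.conf, which is why only testbed-node-0 reports `changed` while the other nodes skip.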
2026-04-09 01:09:13.244683 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.244687 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.244691 | orchestrator |
2026-04-09 01:09:13.244695 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-09 01:09:13.244698 | orchestrator |
2026-04-09 01:09:13.244702 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-09 01:09:13.244725 | orchestrator | Thursday 09 April 2026 01:02:21 +0000 (0:00:00.328) 0:01:47.528 ********
2026-04-09 01:09:13.244729 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:09:13.244733 | orchestrator |
2026-04-09 01:09:13.244737 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-09 01:09:13.244740 | orchestrator | Thursday 09 April 2026 01:02:22 +0000 (0:00:01.041) 0:01:48.569 ********
2026-04-09 01:09:13.244744 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.244748 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.244752 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244755 | orchestrator |
2026-04-09 01:09:13.244759 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-09 01:09:13.244763 | orchestrator | Thursday 09 April 2026 01:02:25 +0000 (0:00:02.455) 0:01:51.025 ********
2026-04-09 01:09:13.244767 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.244771 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.244774 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:09:13.244778 | orchestrator |
2026-04-09 01:09:13.244782 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-09 01:09:13.244786 | orchestrator | Thursday 09 April 2026 01:02:27 +0000 (0:00:02.058) 0:01:53.084 ********
2026-04-09 01:09:13.244789 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.244793 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.244797 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.244801 | orchestrator |
2026-04-09 01:09:13.244805 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-09 01:09:13.244808 | orchestrator | Thursday 09 April 2026 01:02:27 +0000 (0:00:00.390) 0:01:53.474 ********
2026-04-09 01:09:13.244815 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-09 01:09:13.244819 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.244823 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-09 01:09:13.245129 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.245139 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-09 01:09:13.245145 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-09 01:09:13.245152 | orchestrator |
2026-04-09 01:09:13.245158 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-09 01:09:13.245164 | orchestrator | Thursday 09 April 2026 01:02:36 +0000 (0:00:08.983) 0:02:02.458 ********
2026-04-09 01:09:13.245170 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.245177 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.245183 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.245189 | orchestrator |
2026-04-09 01:09:13.245196 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-09 01:09:13.245203 | orchestrator | Thursday 09 April 2026 01:02:36 +0000 (0:00:00.270) 0:02:02.729 ********
2026-04-09 01:09:13.245209 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-09 01:09:13.245216 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.245222 | orchestrator | skipping:
[testbed-node-1] => (item=None)  2026-04-09 01:09:13.245228 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245231 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 01:09:13.245235 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245239 | orchestrator | 2026-04-09 01:09:13.245243 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-09 01:09:13.245247 | orchestrator | Thursday 09 April 2026 01:02:37 +0000 (0:00:00.804) 0:02:03.534 ******** 2026-04-09 01:09:13.245253 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245262 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245270 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.245276 | orchestrator | 2026-04-09 01:09:13.245281 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-09 01:09:13.245287 | orchestrator | Thursday 09 April 2026 01:02:38 +0000 (0:00:00.464) 0:02:03.999 ******** 2026-04-09 01:09:13.245294 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245299 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245304 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.245310 | orchestrator | 2026-04-09 01:09:13.245330 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-09 01:09:13.245337 | orchestrator | Thursday 09 April 2026 01:02:38 +0000 (0:00:00.825) 0:02:04.824 ******** 2026-04-09 01:09:13.245343 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245349 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245378 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.245390 | orchestrator | 2026-04-09 01:09:13.245398 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-09 01:09:13.245404 | orchestrator | Thursday 09 April 2026 
01:02:41 +0000 (0:00:02.152) 0:02:06.976 ******** 2026-04-09 01:09:13.245410 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245416 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245423 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:09:13.245430 | orchestrator | 2026-04-09 01:09:13.245436 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-09 01:09:13.245443 | orchestrator | Thursday 09 April 2026 01:03:03 +0000 (0:00:22.110) 0:02:29.087 ******** 2026-04-09 01:09:13.245449 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245455 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245461 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:09:13.245468 | orchestrator | 2026-04-09 01:09:13.245474 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 01:09:13.245480 | orchestrator | Thursday 09 April 2026 01:03:16 +0000 (0:00:13.738) 0:02:42.825 ******** 2026-04-09 01:09:13.245494 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:09:13.245500 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245507 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245513 | orchestrator | 2026-04-09 01:09:13.245520 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-09 01:09:13.245526 | orchestrator | Thursday 09 April 2026 01:03:17 +0000 (0:00:00.863) 0:02:43.689 ******** 2026-04-09 01:09:13.245532 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.245538 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.245542 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.245546 | orchestrator | 2026-04-09 01:09:13.245550 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-09 01:09:13.245554 | orchestrator | Thursday 09 April 2026 01:03:32 +0000 
(0:00:14.948) 0:02:58.637 ********
2026-04-09 01:09:13.245557 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.245561 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.245565 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.245594 | orchestrator |
2026-04-09 01:09:13.245599 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-09 01:09:13.245603 | orchestrator | Thursday 09 April 2026 01:03:34 +0000 (0:00:02.187) 0:03:00.825 ********
2026-04-09 01:09:13.245606 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:09:13.245640 | orchestrator | skipping: [testbed-node-1]
2026-04-09 01:09:13.245644 | orchestrator | skipping: [testbed-node-2]
2026-04-09 01:09:13.245648 | orchestrator |
2026-04-09 01:09:13.245651 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-09 01:09:13.245655 | orchestrator |
2026-04-09 01:09:13.245659 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-09 01:09:13.245663 | orchestrator | Thursday 09 April 2026 01:03:35 +0000 (0:00:00.366) 0:03:01.192 ********
2026-04-09 01:09:13.245667 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 01:09:13.245813 | orchestrator |
2026-04-09 01:09:13.245818 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-09 01:09:13.245822 | orchestrator | Thursday 09 April 2026 01:03:36 +0000 (0:00:01.064) 0:03:02.256 ********
2026-04-09 01:09:13.245867 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-09 01:09:13.245872 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-09 01:09:13.245875 | orchestrator |
2026-04-09 01:09:13.245879 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-09 01:09:13.245883 | orchestrator | Thursday 09 April 2026 01:03:39 +0000 (0:00:03.333) 0:03:05.590 ********
2026-04-09 01:09:13.245887 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-09 01:09:13.245892 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-09 01:09:13.245896 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-09 01:09:13.245900 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-09 01:09:13.245904 | orchestrator |
2026-04-09 01:09:13.245908 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-09 01:09:13.245911 | orchestrator | Thursday 09 April 2026 01:03:46 +0000 (0:00:07.205) 0:03:12.795 ********
2026-04-09 01:09:13.245915 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-09 01:09:13.245919 | orchestrator |
2026-04-09 01:09:13.245923 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-09 01:09:13.245927 | orchestrator | Thursday 09 April 2026 01:03:50 +0000 (0:00:03.200) 0:03:15.996 ********
2026-04-09 01:09:13.245930 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-09 01:09:13.245939 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-09 01:09:13.245943 | orchestrator |
2026-04-09 01:09:13.245946 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-09 01:09:13.245950 | orchestrator | Thursday 09 April 2026 01:03:54 +0000 (0:00:04.438) 0:03:20.435 ********
2026-04-09 01:09:13.245954 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-09 01:09:13.245958 |
orchestrator | 2026-04-09 01:09:13.245962 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-09 01:09:13.245965 | orchestrator | Thursday 09 April 2026 01:03:58 +0000 (0:00:03.773) 0:03:24.209 ******** 2026-04-09 01:09:13.245972 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-09 01:09:13.245976 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-09 01:09:13.245980 | orchestrator | 2026-04-09 01:09:13.245984 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-09 01:09:13.246002 | orchestrator | Thursday 09 April 2026 01:04:06 +0000 (0:00:08.451) 0:03:32.660 ******** 2026-04-09 01:09:13.246010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246077 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246152 | orchestrator | 2026-04-09 01:09:13.246156 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-09 01:09:13.246160 | orchestrator | Thursday 09 April 2026 01:04:09 +0000 (0:00:02.758) 0:03:35.419 ******** 2026-04-09 01:09:13.246164 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.246167 | orchestrator | 2026-04-09 01:09:13.246171 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-09 01:09:13.246177 | orchestrator | Thursday 09 April 2026 01:04:09 +0000 (0:00:00.239) 0:03:35.658 ******** 2026-04-09 01:09:13.246184 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.246194 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.246200 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.246206 | orchestrator | 2026-04-09 01:09:13.246211 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-09 01:09:13.246217 | orchestrator | Thursday 09 April 2026 01:04:10 +0000 (0:00:00.570) 0:03:36.229 ******** 2026-04-09 01:09:13.246223 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 01:09:13.246229 | orchestrator | 2026-04-09 01:09:13.246235 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-09 01:09:13.246241 | orchestrator | Thursday 09 April 2026 01:04:11 +0000 (0:00:01.280) 0:03:37.510 ******** 2026-04-09 01:09:13.246246 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 01:09:13.246252 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.246265 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.246271 | orchestrator | 2026-04-09 01:09:13.246277 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-09 01:09:13.246282 | orchestrator | Thursday 09 April 2026 01:04:12 +0000 (0:00:00.626) 0:03:38.136 ******** 2026-04-09 01:09:13.246289 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:09:13.246296 | orchestrator | 2026-04-09 01:09:13.246301 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 01:09:13.246307 | orchestrator | Thursday 09 April 2026 01:04:13 +0000 (0:00:01.080) 0:03:39.217 ******** 2026-04-09 01:09:13.246395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246481 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246486 | orchestrator | 2026-04-09 01:09:13.246490 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 01:09:13.246494 | orchestrator | Thursday 09 April 2026 01:04:15 +0000 (0:00:02.654) 0:03:41.872 ******** 2026-04-09 01:09:13.246498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246510 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.246514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246524 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.246540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-04-09 01:09:13.246545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246551 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.246555 | orchestrator | 2026-04-09 01:09:13.246559 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 01:09:13.246563 | orchestrator | Thursday 09 April 2026 01:04:17 +0000 (0:00:01.389) 0:03:43.261 ******** 2026-04-09 01:09:13.246567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246575 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.246593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246604 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.246608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246623 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.246632 | orchestrator | 2026-04-09 01:09:13.246638 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-09 01:09:13.246645 | orchestrator | Thursday 09 April 2026 01:04:18 +0000 (0:00:01.185) 0:03:44.446 ******** 2026-04-09 01:09:13.246673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246743 | orchestrator | 2026-04-09 01:09:13.246749 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-09 01:09:13.246755 | orchestrator | Thursday 09 April 2026 01:04:21 +0000 (0:00:03.100) 0:03:47.547 ******** 2026-04-09 01:09:13.246766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.246811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.246847 | orchestrator | 2026-04-09 01:09:13.246853 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-09 01:09:13.246859 | orchestrator | Thursday 09 April 2026 01:04:29 +0000 (0:00:08.001) 0:03:55.548 ******** 2026-04-09 01:09:13.246866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246901 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.246909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246927 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.246934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-09 01:09:13.246940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.246947 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.246953 | orchestrator | 2026-04-09 01:09:13.246962 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-09 01:09:13.246969 | orchestrator | Thursday 09 April 2026 01:04:30 +0000 (0:00:00.959) 0:03:56.508 ******** 2026-04-09 01:09:13.246975 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:09:13.246982 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.246988 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:09:13.246995 | orchestrator | 2026-04-09 01:09:13.247017 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-09 01:09:13.247022 | orchestrator | Thursday 09 April 2026 01:04:32 +0000 (0:00:02.125) 0:03:58.634 ******** 2026-04-09 01:09:13.247029 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.247033 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.247037 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.247040 | orchestrator | 2026-04-09 01:09:13.247044 | orchestrator | TASK [nova : Check nova containers] 
******************************************** 2026-04-09 01:09:13.247048 | orchestrator | Thursday 09 April 2026 01:04:33 +0000 (0:00:00.369) 0:03:59.003 ******** 2026-04-09 01:09:13.247052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.247056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.247074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-09 01:09:13.247083 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247097 | orchestrator | 2026-04-09 01:09:13.247102 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 01:09:13.247106 | 
orchestrator | Thursday 09 April 2026 01:04:35 +0000 (0:00:02.337) 0:04:01.341 ******** 2026-04-09 01:09:13.247110 | orchestrator | 2026-04-09 01:09:13.247115 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 01:09:13.247119 | orchestrator | Thursday 09 April 2026 01:04:35 +0000 (0:00:00.255) 0:04:01.597 ******** 2026-04-09 01:09:13.247123 | orchestrator | 2026-04-09 01:09:13.247128 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-09 01:09:13.247133 | orchestrator | Thursday 09 April 2026 01:04:35 +0000 (0:00:00.141) 0:04:01.738 ******** 2026-04-09 01:09:13.247137 | orchestrator | 2026-04-09 01:09:13.247142 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-09 01:09:13.247146 | orchestrator | Thursday 09 April 2026 01:04:36 +0000 (0:00:00.178) 0:04:01.917 ******** 2026-04-09 01:09:13.247151 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.247155 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:09:13.247160 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:09:13.247165 | orchestrator | 2026-04-09 01:09:13.247169 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-09 01:09:13.247173 | orchestrator | Thursday 09 April 2026 01:04:50 +0000 (0:00:14.367) 0:04:16.284 ******** 2026-04-09 01:09:13.247178 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.247182 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:09:13.247187 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:09:13.247191 | orchestrator | 2026-04-09 01:09:13.247195 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-09 01:09:13.247200 | orchestrator | 2026-04-09 01:09:13.247205 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2026-04-09 01:09:13.247209 | orchestrator | Thursday 09 April 2026 01:04:56 +0000 (0:00:06.542) 0:04:22.827 ******** 2026-04-09 01:09:13.247214 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:09:13.247222 | orchestrator | 2026-04-09 01:09:13.247226 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:09:13.247231 | orchestrator | Thursday 09 April 2026 01:04:57 +0000 (0:00:01.025) 0:04:23.853 ******** 2026-04-09 01:09:13.247235 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.247240 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.247244 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.247248 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.247253 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.247257 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.247262 | orchestrator | 2026-04-09 01:09:13.247266 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-09 01:09:13.247271 | orchestrator | Thursday 09 April 2026 01:04:58 +0000 (0:00:00.704) 0:04:24.557 ******** 2026-04-09 01:09:13.247275 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.247279 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.247284 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.247290 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:09:13.247295 | orchestrator | 2026-04-09 01:09:13.247299 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-09 01:09:13.247316 | orchestrator | Thursday 09 April 2026 01:04:59 +0000 (0:00:00.726) 0:04:25.283 ******** 2026-04-09 01:09:13.247322 | orchestrator | ok: [testbed-node-4] => 
(item=br_netfilter) 2026-04-09 01:09:13.247326 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-09 01:09:13.247331 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-09 01:09:13.247335 | orchestrator | 2026-04-09 01:09:13.247340 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 01:09:13.247345 | orchestrator | Thursday 09 April 2026 01:05:00 +0000 (0:00:01.094) 0:04:26.378 ******** 2026-04-09 01:09:13.247349 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-09 01:09:13.247354 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-09 01:09:13.247358 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-09 01:09:13.247362 | orchestrator | 2026-04-09 01:09:13.247367 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-09 01:09:13.247371 | orchestrator | Thursday 09 April 2026 01:05:01 +0000 (0:00:01.071) 0:04:27.449 ******** 2026-04-09 01:09:13.247376 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-09 01:09:13.247381 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.247385 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-09 01:09:13.247390 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.247394 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-09 01:09:13.247399 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.247403 | orchestrator | 2026-04-09 01:09:13.247407 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-09 01:09:13.247412 | orchestrator | Thursday 09 April 2026 01:05:02 +0000 (0:00:00.475) 0:04:27.925 ******** 2026-04-09 01:09:13.247416 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 01:09:13.247421 | orchestrator | 
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 01:09:13.247425 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.247429 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 01:09:13.247434 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 01:09:13.247438 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.247443 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 01:09:13.247447 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 01:09:13.247453 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 01:09:13.247456 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 01:09:13.247460 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.247464 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-09 01:09:13.247468 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 01:09:13.247471 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 01:09:13.247475 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-09 01:09:13.247479 | orchestrator | 2026-04-09 01:09:13.247482 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-09 01:09:13.247486 | orchestrator | Thursday 09 April 2026 01:05:04 +0000 (0:00:02.054) 0:04:29.979 ******** 2026-04-09 01:09:13.247490 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.247493 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.247497 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.247501 | orchestrator | 
changed: [testbed-node-3] 2026-04-09 01:09:13.247505 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.247508 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.247512 | orchestrator | 2026-04-09 01:09:13.247516 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-09 01:09:13.247520 | orchestrator | Thursday 09 April 2026 01:05:05 +0000 (0:00:01.105) 0:04:31.085 ******** 2026-04-09 01:09:13.247523 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.247527 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.247531 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.247534 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.247538 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.247542 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.247545 | orchestrator | 2026-04-09 01:09:13.247549 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-09 01:09:13.247553 | orchestrator | Thursday 09 April 2026 01:05:07 +0000 (0:00:01.913) 0:04:32.998 ******** 2026-04-09 01:09:13.247559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247613 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247686 | orchestrator | 2026-04-09 01:09:13.247693 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:09:13.247700 | orchestrator | Thursday 09 April 2026 01:05:10 +0000 (0:00:03.760) 0:04:36.759 ******** 2026-04-09 01:09:13.247707 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:09:13.247715 | orchestrator | 2026-04-09 01:09:13.247722 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-09 01:09:13.247727 | orchestrator | Thursday 09 April 2026 01:05:12 +0000 (0:00:01.170) 0:04:37.929 ******** 2026-04-09 01:09:13.247731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247801 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.247893 | orchestrator | 2026-04-09 01:09:13.247899 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-09 01:09:13.247905 | orchestrator | Thursday 09 April 2026 01:05:16 +0000 (0:00:04.469) 0:04:42.398 ******** 2026-04-09 01:09:13.247933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.247939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.247943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.247947 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.247951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.247955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.247972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.247979 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.247983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.247987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.247991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.247995 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.247999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.248003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248011 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.248027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.248032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248036 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.248040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.248044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248048 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.248052 | orchestrator | 2026-04-09 01:09:13.248056 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-09 01:09:13.248059 | orchestrator | Thursday 09 April 2026 01:05:17 +0000 (0:00:01.476) 0:04:43.875 ******** 2026-04-09 01:09:13.248063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.248068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.248087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248092 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.248096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.248100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.248104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-04-09 01:09:13.248108 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.248112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.248121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.248135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248140 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.248144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.248148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248152 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.248156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.248160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248166 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.248170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.248187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.248192 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.248196 | orchestrator | 2026-04-09 01:09:13.248200 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:09:13.248203 | orchestrator | Thursday 09 April 2026 01:05:19 +0000 (0:00:01.987) 0:04:45.862 ******** 2026-04-09 01:09:13.248207 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.248211 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.248215 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.248219 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:09:13.248223 | orchestrator | 2026-04-09 01:09:13.248227 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-09 01:09:13.248231 | orchestrator | Thursday 09 April 2026 01:05:21 +0000 (0:00:01.065) 0:04:46.928 ******** 2026-04-09 01:09:13.248234 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:09:13.248238 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:09:13.248242 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:09:13.248246 | orchestrator | 2026-04-09 01:09:13.248250 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-09 01:09:13.248254 | orchestrator | Thursday 09 April 2026 01:05:22 +0000 (0:00:01.058) 0:04:47.986 ******** 2026-04-09 01:09:13.248257 | orchestrator | ok: 
[testbed-node-3 -> localhost] 2026-04-09 01:09:13.248261 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:09:13.248265 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:09:13.248269 | orchestrator | 2026-04-09 01:09:13.248272 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-09 01:09:13.248276 | orchestrator | Thursday 09 April 2026 01:05:23 +0000 (0:00:01.159) 0:04:49.145 ******** 2026-04-09 01:09:13.248280 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:09:13.248284 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:09:13.248288 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:09:13.248291 | orchestrator | 2026-04-09 01:09:13.248295 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-09 01:09:13.248301 | orchestrator | Thursday 09 April 2026 01:05:23 +0000 (0:00:00.519) 0:04:49.664 ******** 2026-04-09 01:09:13.248316 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:09:13.248325 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:09:13.248331 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:09:13.248338 | orchestrator | 2026-04-09 01:09:13.248344 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-09 01:09:13.248350 | orchestrator | Thursday 09 April 2026 01:05:24 +0000 (0:00:00.505) 0:04:50.170 ******** 2026-04-09 01:09:13.248357 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 01:09:13.248362 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 01:09:13.248365 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 01:09:13.248369 | orchestrator | 2026-04-09 01:09:13.248373 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-09 01:09:13.248377 | orchestrator | Thursday 09 April 2026 01:05:25 +0000 (0:00:01.097) 0:04:51.267 ******** 
2026-04-09 01:09:13.248380 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 01:09:13.248384 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 01:09:13.248388 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 01:09:13.248392 | orchestrator | 2026-04-09 01:09:13.248395 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-09 01:09:13.248399 | orchestrator | Thursday 09 April 2026 01:05:26 +0000 (0:00:01.408) 0:04:52.676 ******** 2026-04-09 01:09:13.248403 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-09 01:09:13.248406 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-09 01:09:13.248410 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-09 01:09:13.248414 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-09 01:09:13.248417 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-09 01:09:13.248421 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-09 01:09:13.248425 | orchestrator | 2026-04-09 01:09:13.248428 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-09 01:09:13.248432 | orchestrator | Thursday 09 April 2026 01:05:30 +0000 (0:00:03.699) 0:04:56.375 ******** 2026-04-09 01:09:13.248436 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.248440 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.248443 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.248447 | orchestrator | 2026-04-09 01:09:13.248451 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-09 01:09:13.248454 | orchestrator | Thursday 09 April 2026 01:05:30 +0000 (0:00:00.350) 0:04:56.726 ******** 2026-04-09 01:09:13.248458 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
01:09:13.248462 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.248466 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.248469 | orchestrator | 2026-04-09 01:09:13.248473 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-09 01:09:13.248477 | orchestrator | Thursday 09 April 2026 01:05:31 +0000 (0:00:00.328) 0:04:57.055 ******** 2026-04-09 01:09:13.248480 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.248484 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.248488 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.248492 | orchestrator | 2026-04-09 01:09:13.248498 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-09 01:09:13.248502 | orchestrator | Thursday 09 April 2026 01:05:32 +0000 (0:00:01.637) 0:04:58.692 ******** 2026-04-09 01:09:13.248523 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-09 01:09:13.248527 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-09 01:09:13.248531 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-09 01:09:13.248539 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-09 01:09:13.248543 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-09 01:09:13.248547 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-09 
01:09:13.248550 | orchestrator | 2026-04-09 01:09:13.248554 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-09 01:09:13.248558 | orchestrator | Thursday 09 April 2026 01:05:35 +0000 (0:00:02.924) 0:05:01.616 ******** 2026-04-09 01:09:13.248562 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 01:09:13.248565 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 01:09:13.248569 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 01:09:13.248573 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 01:09:13.248577 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.248580 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 01:09:13.248584 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.248588 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 01:09:13.248592 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.248595 | orchestrator | 2026-04-09 01:09:13.248599 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-09 01:09:13.248603 | orchestrator | Thursday 09 April 2026 01:05:38 +0000 (0:00:03.085) 0:05:04.702 ******** 2026-04-09 01:09:13.248607 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.248610 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.248614 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.248618 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 01:09:13.248621 | orchestrator | 2026-04-09 01:09:13.248625 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-09 01:09:13.248629 | orchestrator | Thursday 09 April 2026 01:05:41 +0000 (0:00:02.324) 0:05:07.026 ******** 2026-04-09 01:09:13.248633 | orchestrator | ok: [testbed-node-3 -> 
localhost] 2026-04-09 01:09:13.248636 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 01:09:13.248640 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 01:09:13.248644 | orchestrator | 2026-04-09 01:09:13.248647 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-09 01:09:13.248651 | orchestrator | Thursday 09 April 2026 01:05:42 +0000 (0:00:01.551) 0:05:08.578 ******** 2026-04-09 01:09:13.248655 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.248659 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.248662 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.248666 | orchestrator | 2026-04-09 01:09:13.248670 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-09 01:09:13.248674 | orchestrator | Thursday 09 April 2026 01:05:43 +0000 (0:00:00.489) 0:05:09.067 ******** 2026-04-09 01:09:13.248677 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.248681 | orchestrator | 2026-04-09 01:09:13.248685 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-09 01:09:13.248688 | orchestrator | Thursday 09 April 2026 01:05:43 +0000 (0:00:00.121) 0:05:09.189 ******** 2026-04-09 01:09:13.248692 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.248696 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.248700 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.248703 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.248707 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.248711 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.248714 | orchestrator | 2026-04-09 01:09:13.248718 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-09 01:09:13.248725 | orchestrator | Thursday 09 April 2026 01:05:43 +0000 (0:00:00.599) 
0:05:09.788 ******** 2026-04-09 01:09:13.248728 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 01:09:13.248732 | orchestrator | 2026-04-09 01:09:13.248736 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-09 01:09:13.248739 | orchestrator | Thursday 09 April 2026 01:05:44 +0000 (0:00:00.580) 0:05:10.369 ******** 2026-04-09 01:09:13.248743 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.248747 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.248751 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.248754 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.248758 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.248762 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.248765 | orchestrator | 2026-04-09 01:09:13.248769 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-09 01:09:13.248773 | orchestrator | Thursday 09 April 2026 01:05:44 +0000 (0:00:00.460) 0:05:10.830 ******** 2026-04-09 01:09:13.248783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}}) 2026-04-09 01:09:13.248787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248902 | orchestrator | 2026-04-09 01:09:13.248908 | orchestrator | TASK [nova-cell : Copying over nova.conf] 
************************************** 2026-04-09 01:09:13.248915 | orchestrator | Thursday 09 April 2026 01:05:49 +0000 (0:00:04.330) 0:05:15.160 ******** 2026-04-09 01:09:13.248921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.248932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.248939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.248945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.248950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.248954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.248960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.248995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249019 | orchestrator | 2026-04-09 01:09:13.249025 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-09 01:09:13.249034 | orchestrator | Thursday 09 April 2026 01:05:55 +0000 (0:00:05.931) 0:05:21.092 ******** 2026-04-09 01:09:13.249040 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249046 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249052 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249058 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249068 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249074 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249080 | orchestrator | 2026-04-09 01:09:13.249086 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-09 01:09:13.249093 | orchestrator | Thursday 09 April 2026 01:05:57 +0000 (0:00:02.127) 0:05:23.219 ******** 2026-04-09 01:09:13.249098 | orchestrator | 
skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-09 01:09:13.249102 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-09 01:09:13.249106 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-09 01:09:13.249110 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-09 01:09:13.249114 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-09 01:09:13.249117 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-09 01:09:13.249121 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249125 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-09 01:09:13.249129 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-09 01:09:13.249139 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249143 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-09 01:09:13.249147 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249150 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-09 01:09:13.249154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-09 01:09:13.249158 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-09 01:09:13.249162 | orchestrator | 2026-04-09 01:09:13.249165 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-09 01:09:13.249169 | orchestrator | Thursday 09 April 2026 01:06:01 +0000 (0:00:04.074) 
0:05:27.293 ******** 2026-04-09 01:09:13.249173 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249177 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249180 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249184 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249188 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249192 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249195 | orchestrator | 2026-04-09 01:09:13.249199 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-09 01:09:13.249203 | orchestrator | Thursday 09 April 2026 01:06:02 +0000 (0:00:00.658) 0:05:27.952 ******** 2026-04-09 01:09:13.249207 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-09 01:09:13.249210 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-09 01:09:13.249214 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-09 01:09:13.249218 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-09 01:09:13.249222 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-09 01:09:13.249226 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-09 01:09:13.249229 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-09 01:09:13.249233 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-09 01:09:13.249237 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249241 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-09 01:09:13.249244 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-09 01:09:13.249248 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-09 01:09:13.249252 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-09 01:09:13.249256 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-09 01:09:13.249259 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249263 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-09 01:09:13.249269 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-09 01:09:13.249273 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249277 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-09 01:09:13.249286 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-09 01:09:13.249290 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-09 01:09:13.249294 | orchestrator | 2026-04-09 01:09:13.249298 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-09 01:09:13.249302 | orchestrator | Thursday 09 April 2026 01:06:08 +0000 (0:00:06.558) 0:05:34.510 ******** 
2026-04-09 01:09:13.249306 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 01:09:13.249309 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 01:09:13.249313 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-09 01:09:13.249317 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-09 01:09:13.249321 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-09 01:09:13.249324 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-09 01:09:13.249328 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 01:09:13.249332 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 01:09:13.249335 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 01:09:13.249339 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-09 01:09:13.249343 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 01:09:13.249347 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 01:09:13.249350 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-09 01:09:13.249354 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249358 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-09 01:09:13.249362 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249365 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-09 01:09:13.249369 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249373 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 01:09:13.249376 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 01:09:13.249380 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-09 01:09:13.249384 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 01:09:13.249388 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 01:09:13.249392 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 01:09:13.249395 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 01:09:13.249399 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 01:09:13.249403 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-09 01:09:13.249407 | orchestrator | 2026-04-09 01:09:13.249410 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-09 01:09:13.249414 | orchestrator | Thursday 09 April 2026 01:06:16 +0000 (0:00:07.486) 0:05:41.997 ******** 2026-04-09 01:09:13.249418 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249421 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249428 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249431 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249435 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249439 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249443 | orchestrator | 2026-04-09 01:09:13.249446 | 
orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-09 01:09:13.249450 | orchestrator | Thursday 09 April 2026 01:06:16 +0000 (0:00:00.493) 0:05:42.491 ******** 2026-04-09 01:09:13.249454 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249458 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249461 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249465 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249469 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249472 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249476 | orchestrator | 2026-04-09 01:09:13.249480 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-09 01:09:13.249483 | orchestrator | Thursday 09 April 2026 01:06:17 +0000 (0:00:00.648) 0:05:43.140 ******** 2026-04-09 01:09:13.249487 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249491 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249495 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.249498 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249502 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.249506 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.249509 | orchestrator | 2026-04-09 01:09:13.249515 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-09 01:09:13.249519 | orchestrator | Thursday 09 April 2026 01:06:19 +0000 (0:00:01.863) 0:05:45.004 ******** 2026-04-09 01:09:13.249523 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249529 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249533 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249536 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.249540 | orchestrator | changed: [testbed-node-4] 2026-04-09 
01:09:13.249544 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.249548 | orchestrator | 2026-04-09 01:09:13.249551 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-09 01:09:13.249555 | orchestrator | Thursday 09 April 2026 01:06:20 +0000 (0:00:01.805) 0:05:46.809 ******** 2026-04-09 01:09:13.249559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.249564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.249568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.249574 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.249584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.249591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.249595 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-09 01:09:13.249603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-09 01:09:13.249610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.249614 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.249626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.249630 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.249638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.249642 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-09 01:09:13.249654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 01:09:13.249658 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249662 | orchestrator | 2026-04-09 01:09:13.249666 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-09 01:09:13.249670 | orchestrator | Thursday 09 April 2026 01:06:22 +0000 (0:00:01.232) 0:05:48.041 ******** 2026-04-09 01:09:13.249674 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-09 01:09:13.249677 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic) 
 2026-04-09 01:09:13.249681 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249685 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-09 01:09:13.249689 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-09 01:09:13.249692 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249696 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-09 01:09:13.249700 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-09 01:09:13.249704 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249707 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-09 01:09:13.249711 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-09 01:09:13.249715 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249719 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-09 01:09:13.249722 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-09 01:09:13.249726 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249730 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-09 01:09:13.249734 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-09 01:09:13.249737 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249741 | orchestrator | 2026-04-09 01:09:13.249745 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-09 01:09:13.249751 | orchestrator | Thursday 09 April 2026 01:06:22 +0000 (0:00:00.796) 0:05:48.838 ******** 2026-04-09 01:09:13.249757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-09 01:09:13.249859 | orchestrator | 2026-04-09 01:09:13.249865 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-09 01:09:13.249872 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:03.177) 0:05:52.015 ******** 2026-04-09 01:09:13.249878 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.249884 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.249889 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.249893 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.249897 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.249900 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.249904 | orchestrator | 2026-04-09 01:09:13.249908 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:09:13.249912 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.603) 0:05:52.619 ******** 2026-04-09 01:09:13.249916 | orchestrator | 2026-04-09 01:09:13.249919 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:09:13.249923 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.140) 0:05:52.759 ******** 2026-04-09 01:09:13.249927 | orchestrator | 2026-04-09 01:09:13.249931 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:09:13.249934 | orchestrator | Thursday 09 April 2026 01:06:26 +0000 (0:00:00.119) 0:05:52.879 ******** 2026-04-09 
01:09:13.249938 | orchestrator | 2026-04-09 01:09:13.249942 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:09:13.249946 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:00.120) 0:05:52.999 ******** 2026-04-09 01:09:13.249949 | orchestrator | 2026-04-09 01:09:13.249953 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:09:13.249957 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:00.124) 0:05:53.124 ******** 2026-04-09 01:09:13.249961 | orchestrator | 2026-04-09 01:09:13.249964 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-09 01:09:13.249968 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:00.209) 0:05:53.333 ******** 2026-04-09 01:09:13.249972 | orchestrator | 2026-04-09 01:09:13.249976 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-09 01:09:13.249979 | orchestrator | Thursday 09 April 2026 01:06:27 +0000 (0:00:00.118) 0:05:53.452 ******** 2026-04-09 01:09:13.249983 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:09:13.249991 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.249994 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:09:13.249998 | orchestrator | 2026-04-09 01:09:13.250002 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-09 01:09:13.250006 | orchestrator | Thursday 09 April 2026 01:06:39 +0000 (0:00:11.508) 0:06:04.960 ******** 2026-04-09 01:09:13.250009 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.250033 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:09:13.250037 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:09:13.250041 | orchestrator | 2026-04-09 01:09:13.250045 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] 
*********************** 2026-04-09 01:09:13.250051 | orchestrator | Thursday 09 April 2026 01:06:55 +0000 (0:00:16.010) 0:06:20.970 ******** 2026-04-09 01:09:13.250055 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.250059 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.250062 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.250066 | orchestrator | 2026-04-09 01:09:13.250073 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-09 01:09:13.250077 | orchestrator | Thursday 09 April 2026 01:07:17 +0000 (0:00:22.452) 0:06:43.423 ******** 2026-04-09 01:09:13.250081 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.250084 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.250088 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.250092 | orchestrator | 2026-04-09 01:09:13.250096 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-09 01:09:13.250099 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:32.600) 0:07:16.024 ******** 2026-04-09 01:09:13.250103 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.250107 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.250111 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.250114 | orchestrator | 2026-04-09 01:09:13.250118 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-09 01:09:13.250122 | orchestrator | Thursday 09 April 2026 01:07:50 +0000 (0:00:00.753) 0:07:16.778 ******** 2026-04-09 01:09:13.250126 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.250129 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.250133 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.250137 | orchestrator | 2026-04-09 01:09:13.250141 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] 
******************* 2026-04-09 01:09:13.250145 | orchestrator | Thursday 09 April 2026 01:07:51 +0000 (0:00:00.680) 0:07:17.458 ******** 2026-04-09 01:09:13.250148 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:09:13.250152 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:09:13.250156 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:09:13.250160 | orchestrator | 2026-04-09 01:09:13.250163 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-09 01:09:13.250167 | orchestrator | Thursday 09 April 2026 01:08:08 +0000 (0:00:16.861) 0:07:34.319 ******** 2026-04-09 01:09:13.250171 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.250175 | orchestrator | 2026-04-09 01:09:13.250179 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-09 01:09:13.250182 | orchestrator | Thursday 09 April 2026 01:08:08 +0000 (0:00:00.314) 0:07:34.634 ******** 2026-04-09 01:09:13.250186 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250190 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.250194 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.250197 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250201 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.250205 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
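The "Waiting for nova-compute services to register themselves" task above is a bounded retry: the first poll fails ("20 retries left"), the second succeeds. A minimal sketch of that pattern, assuming a hypothetical `list_services` callable that returns the currently registered compute hosts (the retry/delay values mirror the Ansible task, not a real API):

```python
import time

def wait_for_services(list_services, expected, retries=20, delay=10, sleep=time.sleep):
    """Poll until all expected compute services have registered.

    Mirrors the Ansible `retries`/`delay` pattern seen in the log;
    `list_services` is a hypothetical callable returning registered
    service host names. Returns the number of polls that were needed.
    """
    for attempt in range(retries):
        registered = set(list_services())
        if set(expected) <= registered:
            return attempt + 1
        sleep(delay)
    raise TimeoutError(f"services failed to register after {retries} retries")
```

In the log this resolves on the second poll, which is why the task reports one FAILED - RETRYING line before the final `ok`.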
2026-04-09 01:09:13.250209 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:09:13.250213 | orchestrator | 2026-04-09 01:09:13.250217 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-09 01:09:13.250223 | orchestrator | Thursday 09 April 2026 01:08:28 +0000 (0:00:20.183) 0:07:54.817 ******** 2026-04-09 01:09:13.250227 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250230 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.250234 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.250238 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.250242 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.250245 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250249 | orchestrator | 2026-04-09 01:09:13.250253 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-09 01:09:13.250257 | orchestrator | Thursday 09 April 2026 01:08:34 +0000 (0:00:05.839) 0:08:00.657 ******** 2026-04-09 01:09:13.250260 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.250264 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250268 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.250272 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250275 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.250279 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-04-09 01:09:13.250283 | orchestrator | 2026-04-09 01:09:13.250287 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-09 01:09:13.250291 | orchestrator | Thursday 09 April 2026 01:08:36 +0000 (0:00:01.754) 0:08:02.412 ******** 2026-04-09 01:09:13.250294 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:09:13.250298 | 
orchestrator | 2026-04-09 01:09:13.250302 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-09 01:09:13.250306 | orchestrator | Thursday 09 April 2026 01:08:50 +0000 (0:00:13.594) 0:08:16.007 ******** 2026-04-09 01:09:13.250309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:09:13.250313 | orchestrator | 2026-04-09 01:09:13.250317 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-09 01:09:13.250321 | orchestrator | Thursday 09 April 2026 01:08:51 +0000 (0:00:00.897) 0:08:16.904 ******** 2026-04-09 01:09:13.250325 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.250328 | orchestrator | 2026-04-09 01:09:13.250332 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-09 01:09:13.250336 | orchestrator | Thursday 09 April 2026 01:08:51 +0000 (0:00:00.892) 0:08:17.797 ******** 2026-04-09 01:09:13.250340 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:09:13.250343 | orchestrator | 2026-04-09 01:09:13.250347 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-09 01:09:13.250351 | orchestrator | Thursday 09 April 2026 01:09:03 +0000 (0:00:11.791) 0:08:29.588 ******** 2026-04-09 01:09:13.250355 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:09:13.250359 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:09:13.250362 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:09:13.250366 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:09:13.250370 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:09:13.250373 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:09:13.250377 | orchestrator | 2026-04-09 01:09:13.250383 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-09 01:09:13.250387 | orchestrator | 2026-04-09 
01:09:13.250391 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-09 01:09:13.250397 | orchestrator | Thursday 09 April 2026 01:09:05 +0000 (0:00:01.741) 0:08:31.330 ******** 2026-04-09 01:09:13.250401 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:09:13.250405 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:09:13.250409 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:09:13.250413 | orchestrator | 2026-04-09 01:09:13.250416 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-09 01:09:13.250420 | orchestrator | 2026-04-09 01:09:13.250424 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-09 01:09:13.250431 | orchestrator | Thursday 09 April 2026 01:09:06 +0000 (0:00:01.094) 0:08:32.425 ******** 2026-04-09 01:09:13.250434 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250438 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250442 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.250446 | orchestrator | 2026-04-09 01:09:13.250449 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-09 01:09:13.250453 | orchestrator | 2026-04-09 01:09:13.250457 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-09 01:09:13.250461 | orchestrator | Thursday 09 April 2026 01:09:07 +0000 (0:00:00.487) 0:08:32.912 ******** 2026-04-09 01:09:13.250464 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-09 01:09:13.250468 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-09 01:09:13.250472 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-09 01:09:13.250476 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-09 01:09:13.250480 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-09 01:09:13.250484 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-09 01:09:13.250487 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:09:13.250491 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-09 01:09:13.250495 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-09 01:09:13.250499 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-09 01:09:13.250503 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-09 01:09:13.250507 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-09 01:09:13.250510 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-09 01:09:13.250514 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-09 01:09:13.250518 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-09 01:09:13.250522 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-09 01:09:13.250526 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-09 01:09:13.250529 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-09 01:09:13.250533 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-09 01:09:13.250537 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:09:13.250541 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-09 01:09:13.250544 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-09 01:09:13.250548 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-09 01:09:13.250552 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-09 01:09:13.250556 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-09 
01:09:13.250559 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-09 01:09:13.250563 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:09:13.250567 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-09 01:09:13.250571 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-09 01:09:13.250575 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-09 01:09:13.250578 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-09 01:09:13.250582 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-09 01:09:13.250586 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-09 01:09:13.250590 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250594 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250597 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-09 01:09:13.250601 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-09 01:09:13.250605 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-09 01:09:13.250611 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-09 01:09:13.250615 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-09 01:09:13.250619 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-09 01:09:13.250622 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.250626 | orchestrator | 2026-04-09 01:09:13.250630 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-09 01:09:13.250634 | orchestrator | 2026-04-09 01:09:13.250638 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-09 01:09:13.250641 | orchestrator | Thursday 09 April 2026 01:09:08 +0000 (0:00:01.384) 
0:08:34.297 ******** 2026-04-09 01:09:13.250645 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-09 01:09:13.250649 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-09 01:09:13.250653 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250656 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-09 01:09:13.250660 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-09 01:09:13.250666 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250670 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-09 01:09:13.250674 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-09 01:09:13.250677 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:09:13.250681 | orchestrator | 2026-04-09 01:09:13.250687 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-09 01:09:13.250691 | orchestrator | 2026-04-09 01:09:13.250695 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-09 01:09:13.250699 | orchestrator | Thursday 09 April 2026 01:09:09 +0000 (0:00:00.677) 0:08:34.975 ******** 2026-04-09 01:09:13.250702 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250706 | orchestrator | 2026-04-09 01:09:13.250710 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-09 01:09:13.250714 | orchestrator | 2026-04-09 01:09:13.250718 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-09 01:09:13.250721 | orchestrator | Thursday 09 April 2026 01:09:09 +0000 (0:00:00.700) 0:08:35.675 ******** 2026-04-09 01:09:13.250725 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:09:13.250729 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:09:13.250733 | orchestrator | skipping: [testbed-node-2] 
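The container definitions earlier in the log all carry a healthcheck block (`interval: 30`, `retries: 3`, `timeout: 30`, `test: healthcheck_port …`). Under Docker-style semantics a container is marked unhealthy only after `retries` consecutive probe failures, and any success resets the counter. A minimal sketch of that reduction (the probe itself is abstracted to a boolean; this is an illustration, not kolla's implementation):

```python
def container_health(probe_results, retries=3):
    """Reduce a sequence of probe outcomes (True = probe passed) to a
    health status: "unhealthy" only after `retries` consecutive
    failures, with any success resetting the failure counter.
    retries=3 matches the healthcheck blocks in the log above.
    """
    failures = 0
    status = "starting"  # no probe has run yet
    for ok in probe_results:
        if ok:
            failures = 0
            status = "healthy"
        else:
            failures += 1
            if failures >= retries:
                status = "unhealthy"
    return status
```

This is why a single failed probe during a container restart does not flip the container to unhealthy: it takes three in a row at 30-second intervals.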
2026-04-09 01:09:13.250736 | orchestrator | 2026-04-09 01:09:13.250740 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:09:13.250744 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 01:09:13.250748 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-09 01:09:13.250752 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-09 01:09:13.250756 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-09 01:09:13.250760 | orchestrator | testbed-node-3 : ok=46  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 01:09:13.250764 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-09 01:09:13.250768 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-09 01:09:13.250771 | orchestrator | 2026-04-09 01:09:13.250778 | orchestrator | 2026-04-09 01:09:13.250782 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:09:13.250786 | orchestrator | Thursday 09 April 2026 01:09:10 +0000 (0:00:00.595) 0:08:36.270 ******** 2026-04-09 01:09:13.250789 | orchestrator | =============================================================================== 2026-04-09 01:09:13.250793 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.64s 2026-04-09 01:09:13.250797 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 32.60s 2026-04-09 01:09:13.250801 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.45s 2026-04-09 01:09:13.250805 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 22.11s 2026-04-09 01:09:13.250808 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.18s 2026-04-09 01:09:13.250812 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.92s 2026-04-09 01:09:13.250816 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.77s 2026-04-09 01:09:13.250820 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 16.86s 2026-04-09 01:09:13.250823 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.01s 2026-04-09 01:09:13.250857 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.95s 2026-04-09 01:09:13.250862 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 14.37s 2026-04-09 01:09:13.250865 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.74s 2026-04-09 01:09:13.250869 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.60s 2026-04-09 01:09:13.250873 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.41s 2026-04-09 01:09:13.250877 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.79s 2026-04-09 01:09:13.250880 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.51s 2026-04-09 01:09:13.250884 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.98s 2026-04-09 01:09:13.250888 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.45s 2026-04-09 01:09:13.250891 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.00s 2026-04-09 01:09:13.250895 | orchestrator | nova-cell : Copying files 
for nova-ssh ---------------------------------- 7.49s 2026-04-09 01:09:13.250899 | orchestrator | 2026-04-09 01:09:13 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:09:13.250903 | orchestrator | 2026-04-09 01:09:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:39.313310
| orchestrator | 2026-04-09 01:11:39 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:11:39.313364 | orchestrator | 2026-04-09 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:42.360664 | orchestrator | 2026-04-09 01:11:42 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state STARTED 2026-04-09 01:11:42.360765 | orchestrator | 2026-04-09 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:45.412822 | orchestrator | 2026-04-09 01:11:45 | INFO  | Task 65e7275b-30f1-4235-bb69-c1a94f02680f is in state SUCCESS 2026-04-09 01:11:45.412935 | orchestrator | 2026-04-09 01:11:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-09 01:11:45.413642 | orchestrator | 2026-04-09 01:11:45.413697 | orchestrator | 2026-04-09 01:11:45.413708 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:11:45.413715 | orchestrator | 2026-04-09 01:11:45.413722 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:11:45.413728 | orchestrator | Thursday 09 April 2026 01:07:08 +0000 (0:00:00.547) 0:00:00.547 ******** 2026-04-09 01:11:45.413735 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:11:45.413742 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:11:45.413746 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:11:45.413750 | orchestrator | 2026-04-09 01:11:45.413754 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:11:45.413758 | orchestrator | Thursday 09 April 2026 01:07:08 +0000 (0:00:00.317) 0:00:00.865 ******** 2026-04-09 01:11:45.413762 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-09 01:11:45.413767 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-09 01:11:45.413771 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 
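The polling loop recorded above (check the task's state, wait one second, repeat until it leaves STARTED) can be sketched as follows. This is a minimal illustration, not the OSISM client itself; `get_task_state` is a hypothetical caller-supplied lookup standing in for the real API call:

```python
import itertools
import time

def wait_for_task(get_task_state, task_id, interval=1.0, timeout=300.0):
    """Poll a task until it leaves the STARTED state, as the log above does.

    get_task_state is a caller-supplied function (hypothetical stand-in for
    the real task API) returning the task's current state string,
    e.g. "STARTED" or "SUCCESS".
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state  # SUCCESS, FAILURE, ...
        if time.monotonic() > deadline:
            raise TimeoutError(f"task {task_id} still STARTED after {timeout}s")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

# Example with a fake task that reaches SUCCESS on the third check:
states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
result = wait_for_task(lambda _id: next(states), "65e7275b", interval=0)
```

A fixed one-second interval keeps the sketch faithful to the log; a production poller would typically add jitter or exponential backoff to avoid hammering the API.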
2026-04-09 01:11:45.413775 | orchestrator |
PLAY [Apply role octavia] ******************************************************

TASK [octavia : include_tasks] *************************************************
Thursday 09 April 2026 01:07:08 +0000 (0:00:00.308) 0:00:01.173 ********
included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : octavia | Creating services] ***********************
Thursday 09 April 2026 01:07:09 +0000 (0:00:00.706) 0:00:01.879 ********
changed: [testbed-node-0] => (item=octavia (load-balancer))

TASK [service-ks-register : octavia | Creating endpoints] **********************
Thursday 09 April 2026 01:07:12 +0000 (0:00:03.429) 0:00:05.309 ********
changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)

TASK [service-ks-register : octavia | Creating projects] ***********************
Thursday 09 April 2026 01:07:18 +0000 (0:00:05.731) 0:00:11.041 ********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : octavia | Creating users] **************************
Thursday 09 April 2026 01:07:21 +0000 (0:00:03.058) 0:00:14.099 ********
changed: [testbed-node-0] => (item=octavia -> service)
changed: [testbed-node-0] => (item=octavia -> service)
[WARNING]: Module did not set no_log for update_password

TASK [service-ks-register : octavia | Creating roles] **************************
Thursday 09 April 2026 01:07:30 +0000 (0:00:08.653) 0:00:22.753 ********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : octavia | Granting user roles] *********************
Thursday 09 April 2026 01:07:33 +0000 (0:00:03.343) 0:00:26.097 ********
changed: [testbed-node-0] => (item=octavia -> service -> admin)
ok: [testbed-node-0] => (item=octavia -> service -> admin)

TASK [octavia : Adding octavia related roles] **********************************
Thursday 09 April 2026 01:07:39 +0000 (0:00:06.234) 0:00:32.331 ********
changed: [testbed-node-0] => (item=load-balancer_observer)
changed: [testbed-node-0] => (item=load-balancer_global_observer)
changed: [testbed-node-0] => (item=load-balancer_member)
changed: [testbed-node-0] => (item=load-balancer_admin)
changed: [testbed-node-0] => (item=load-balancer_quota_admin)

TASK [octavia : include_tasks] *************************************************
Thursday 09 April 2026 01:07:55 +0000 (0:00:16.034) 0:00:48.366 ********
included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [octavia : Create amphora flavor] *****************************************
Thursday 09 April 2026 01:07:56 +0000 (0:00:00.700) 0:00:49.066 ********
changed: [testbed-node-0]

TASK [octavia : Create nova keypair for amphora] *******************************
Thursday 09 April 2026 01:08:01 +0000 (0:00:05.128) 0:00:54.194 ********
changed: [testbed-node-0]

TASK [octavia : Get service project id] ****************************************
Thursday 09 April 2026 01:08:05 +0000 (0:00:03.799) 0:00:57.994 ********
ok: [testbed-node-0]

TASK [octavia : Create security groups for octavia] ****************************
Thursday 09 April 2026 01:08:08 +0000 (0:00:02.785) 0:01:00.780 ********
changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)

TASK [octavia : Add rules for security groups] *********************************
Thursday 09 April 2026 01:08:18 +0000 (0:00:09.785) 0:01:10.565 ********
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])

TASK [octavia : Create loadbalancer management network] ************************
Thursday 09 April 2026 01:08:34 +0000 (0:00:16.201) 0:01:26.767 ********
changed: [testbed-node-0]

TASK [octavia : Create loadbalancer management subnet] *************************
Thursday 09 April 2026 01:08:39 +0000 (0:00:05.576) 0:01:32.343 ********
changed: [testbed-node-0]

TASK [octavia : Create loadbalancer management router for IPv6] ****************
Thursday 09 April 2026 01:08:45 +0000 (0:00:05.292) 0:01:37.636 ********
skipping: [testbed-node-0]

TASK [octavia : Update loadbalancer management subnet] *************************
Thursday 09 April 2026 01:08:45 +0000 (0:00:00.535) 0:01:38.171 ********
ok: [testbed-node-0]

TASK [octavia : include_tasks] *************************************************
Thursday 09 April 2026 01:08:49 +0000 (0:00:03.941) 0:01:42.113 ********
included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [octavia : Create ports for Octavia health-manager nodes] *****************
Thursday 09 April 2026 01:08:50 +0000 (0:00:00.821) 0:01:42.935 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [octavia : Update Octavia health manager port host_id] ********************
Thursday 09 April 2026 01:08:56 +0000 (0:00:06.282) 0:01:49.218 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [octavia : Add Octavia port to openvswitch br-int] ************************
Thursday 09 April 2026 01:09:01 +0000 (0:00:04.357) 0:01:53.576 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [octavia : Install isc-dhcp-client package] *******************************
Thursday 09 April 2026 01:09:01 +0000 (0:00:00.714) 0:01:54.290 ********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [octavia : Create octavia dhclient conf] **********************************
Thursday 09 April 2026 01:09:03 +0000 (0:00:01.665) 0:01:55.955 ********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [octavia : Create octavia-interface service] ******************************
Thursday 09 April 2026 01:09:04 +0000 (0:00:01.228) 0:01:57.184 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [octavia : Restart octavia-interface.service if required] *****************
Thursday 09 April 2026 01:09:05 +0000 (0:00:01.176) 0:01:58.361 ********
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [octavia : Enable and start octavia-interface.service] ********************
Thursday 09 April 2026 01:09:08 +0000 (0:00:02.317) 0:02:00.678 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [octavia : Wait for interface ohm0 ip appear] *****************************
Thursday 09 April 2026 01:09:10 +0000 (0:00:01.753) 0:02:02.432 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [octavia : Gather facts] **************************************************
Thursday 09 April 2026 01:09:10 +0000 (0:00:00.670) 0:02:03.103 ********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [octavia : include_tasks] *************************************************
Thursday 09 April 2026 01:09:14 +0000 (0:00:03.680) 0:02:06.784 ********
included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [octavia : Get amphora flavor info] ***************************************
Thursday 09 April 2026 01:09:14 +0000 (0:00:00.568) 0:02:07.353 ********
ok: [testbed-node-0]

TASK [octavia : Get service project id] ****************************************
Thursday 09 April 2026 01:09:18 +0000 (0:00:03.254) 0:02:10.607 ********
ok: [testbed-node-0]

TASK [octavia : Get security groups for octavia] *******************************
Thursday 09 April 2026 01:09:21 +0000 (0:00:03.206) 0:02:13.814 ********
ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)

TASK [octavia : Get loadbalancer management network] ***************************
Thursday 09 April 2026 01:09:28 +0000 (0:00:07.076) 0:02:20.891 ********
ok: [testbed-node-0]

TASK [octavia : Set octavia resources facts] ***********************************
Thursday 09 April 2026 01:09:32 +0000 (0:00:03.725) 0:02:24.616 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [octavia : Ensuring config directories exist] *****************************
Thursday 09 April 2026 01:09:32 +0000 (0:00:00.267) 0:02:24.883 ********
changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})

TASK [octavia : Check if policies shall be overwritten] ************************
Thursday 09 April 2026 01:09:34 +0000 (0:00:02.470) 0:02:27.354 ********
skipping: [testbed-node-0]

TASK [octavia : Set octavia policy file] ***************************************
Thursday 09 April 2026 01:09:35 +0000 (0:00:00.113) 0:02:27.467 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [octavia : Copying over existing policy file] *****************************
Thursday 09 April 2026 01:09:35 +0000 (0:00:00.256) 0:02:27.724 ********
skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:11:45.415903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.415911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.415920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.415933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.415937 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:11:45.415941 | orchestrator | 2026-04-09 01:11:45.415945 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-09 01:11:45.415949 | orchestrator | Thursday 09 April 2026 01:09:35 +0000 (0:00:00.660) 0:02:28.384 ******** 2026-04-09 01:11:45.415953 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:11:45.415957 | orchestrator | 2026-04-09 01:11:45.415961 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-09 01:11:45.415965 | orchestrator | Thursday 09 April 2026 01:09:36 +0000 (0:00:00.679) 0:02:29.064 ******** 2026-04-09 01:11:45.415969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.415986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.415992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416016 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416031 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416070 | orchestrator | 2026-04-09 01:11:45.416074 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-09 01:11:45.416078 | orchestrator | Thursday 09 April 2026 01:09:41 +0000 (0:00:05.117) 0:02:34.181 
******** 2026-04-09 01:11:45.416082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:11:45.416092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.416096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.416108 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:11:45.416116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:11:45.416126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.416130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.416153 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:11:45.416162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-09 01:11:45.416169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.416185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.416210 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:11:45.416216 | orchestrator | 2026-04-09 01:11:45.416223 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-09 01:11:45.416229 | orchestrator | Thursday 09 April 2026 01:09:42 +0000 (0:00:00.653) 0:02:34.835 ******** 2026-04-09 01:11:45.416239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:11:45.416245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.416250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.416282 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:11:45.416292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:11:45.416300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.416307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.416333 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:11:45.416338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 01:11:45.416346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 01:11:45.416350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-04-09 01:11:45.416354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 01:11:45.416358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 01:11:45.416366 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:11:45.416370 | orchestrator | 2026-04-09 01:11:45.416374 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-09 01:11:45.416378 | orchestrator | Thursday 09 April 2026 01:09:43 +0000 (0:00:01.010) 0:02:35.845 ******** 2026-04-09 01:11:45.416386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416472 | orchestrator | 2026-04-09 01:11:45.416476 | 
orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-09 01:11:45.416480 | orchestrator | Thursday 09 April 2026 01:09:48 +0000 (0:00:05.315) 0:02:41.161 ******** 2026-04-09 01:11:45.416487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 01:11:45.416492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 01:11:45.416496 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-09 01:11:45.416500 | orchestrator | 2026-04-09 01:11:45.416505 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-09 01:11:45.416509 | orchestrator | Thursday 09 April 2026 01:09:50 +0000 (0:00:01.629) 0:02:42.791 ******** 2026-04-09 01:11:45.416513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416590 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.416598 | orchestrator | 2026-04-09 01:11:45.416702 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-09 01:11:45.416708 | orchestrator | Thursday 09 April 2026 01:10:05 +0000 (0:00:15.637) 0:02:58.428 ******** 2026-04-09 01:11:45.416712 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.416716 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:11:45.416720 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:11:45.416724 | orchestrator | 2026-04-09 01:11:45.416728 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-09 01:11:45.416732 | orchestrator | Thursday 09 April 2026 01:10:07 +0000 (0:00:01.879) 0:03:00.307 ******** 2026-04-09 01:11:45.416736 | orchestrator 
| changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416740 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416748 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416753 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416757 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416761 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416765 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416769 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416773 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416777 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416781 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416784 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416788 | orchestrator | 2026-04-09 01:11:45.416792 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-09 01:11:45.416796 | orchestrator | Thursday 09 April 2026 01:10:13 +0000 (0:00:05.267) 0:03:05.575 ******** 2026-04-09 01:11:45.416800 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416804 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416807 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416811 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416815 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416819 | orchestrator | 
changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416831 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416835 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416839 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416843 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416847 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416851 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416855 | orchestrator | 2026-04-09 01:11:45.416859 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-09 01:11:45.416863 | orchestrator | Thursday 09 April 2026 01:10:18 +0000 (0:00:05.047) 0:03:10.623 ******** 2026-04-09 01:11:45.416867 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416871 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416876 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-09 01:11:45.416908 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416914 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416918 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-09 01:11:45.416922 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416926 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416930 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-09 01:11:45.416934 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416938 | orchestrator | changed: 
[testbed-node-1] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416942 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-09 01:11:45.416945 | orchestrator | 2026-04-09 01:11:45.416949 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-09 01:11:45.416953 | orchestrator | Thursday 09 April 2026 01:10:23 +0000 (0:00:05.193) 0:03:15.816 ******** 2026-04-09 01:11:45.416958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 01:11:45.416980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-09 01:11:45.416993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-09 01:11:45.417049 | orchestrator | 2026-04-09 01:11:45.417053 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-09 01:11:45.417057 | orchestrator | Thursday 09 April 2026 01:10:27 +0000 (0:00:04.318) 0:03:20.134 ******** 2026-04-09 01:11:45.417061 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:11:45.417065 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:11:45.417069 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:11:45.417072 | orchestrator | 2026-04-09 01:11:45.417076 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-09 01:11:45.417080 | orchestrator | Thursday 09 April 2026 01:10:28 +0000 (0:00:00.471) 0:03:20.606 ******** 2026-04-09 01:11:45.417084 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417088 | orchestrator | 2026-04-09 01:11:45.417092 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-09 01:11:45.417095 | orchestrator | Thursday 09 April 2026 01:10:30 +0000 (0:00:02.199) 0:03:22.806 ******** 2026-04-09 01:11:45.417099 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417103 | orchestrator | 2026-04-09 
01:11:45.417107 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-09 01:11:45.417111 | orchestrator | Thursday 09 April 2026 01:10:32 +0000 (0:00:02.501) 0:03:25.307 ******** 2026-04-09 01:11:45.417114 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417118 | orchestrator | 2026-04-09 01:11:45.417122 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-09 01:11:45.417126 | orchestrator | Thursday 09 April 2026 01:10:35 +0000 (0:00:02.426) 0:03:27.734 ******** 2026-04-09 01:11:45.417130 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417134 | orchestrator | 2026-04-09 01:11:45.417138 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-09 01:11:45.417142 | orchestrator | Thursday 09 April 2026 01:10:37 +0000 (0:00:02.439) 0:03:30.173 ******** 2026-04-09 01:11:45.417145 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417149 | orchestrator | 2026-04-09 01:11:45.417156 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 01:11:45.417161 | orchestrator | Thursday 09 April 2026 01:10:57 +0000 (0:00:20.216) 0:03:50.390 ******** 2026-04-09 01:11:45.417165 | orchestrator | 2026-04-09 01:11:45.417169 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 01:11:45.417173 | orchestrator | Thursday 09 April 2026 01:10:58 +0000 (0:00:00.075) 0:03:50.465 ******** 2026-04-09 01:11:45.417176 | orchestrator | 2026-04-09 01:11:45.417180 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-09 01:11:45.417184 | orchestrator | Thursday 09 April 2026 01:10:58 +0000 (0:00:00.073) 0:03:50.539 ******** 2026-04-09 01:11:45.417188 | orchestrator | 2026-04-09 01:11:45.417192 | orchestrator | RUNNING HANDLER [octavia : 
Restart octavia-api container] ********************** 2026-04-09 01:11:45.417196 | orchestrator | Thursday 09 April 2026 01:10:58 +0000 (0:00:00.066) 0:03:50.606 ******** 2026-04-09 01:11:45.417200 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417203 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:11:45.417207 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:11:45.417211 | orchestrator | 2026-04-09 01:11:45.417215 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-09 01:11:45.417219 | orchestrator | Thursday 09 April 2026 01:11:12 +0000 (0:00:13.858) 0:04:04.464 ******** 2026-04-09 01:11:45.417222 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417226 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:11:45.417230 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:11:45.417234 | orchestrator | 2026-04-09 01:11:45.417237 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-09 01:11:45.417241 | orchestrator | Thursday 09 April 2026 01:11:17 +0000 (0:00:05.813) 0:04:10.278 ******** 2026-04-09 01:11:45.417245 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417253 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:11:45.417257 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:11:45.417260 | orchestrator | 2026-04-09 01:11:45.417264 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-09 01:11:45.417268 | orchestrator | Thursday 09 April 2026 01:11:22 +0000 (0:00:04.619) 0:04:14.898 ******** 2026-04-09 01:11:45.417272 | orchestrator | changed: [testbed-node-2] 2026-04-09 01:11:45.417276 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:11:45.417279 | orchestrator | changed: [testbed-node-1] 2026-04-09 01:11:45.417283 | orchestrator | 2026-04-09 01:11:45.417287 | orchestrator | RUNNING HANDLER [octavia : Restart 
octavia-worker container] *******************
2026-04-09 01:11:45.417291 | orchestrator | Thursday 09 April 2026 01:11:32 +0000 (0:00:09.970) 0:04:24.869 ********
2026-04-09 01:11:45.417294 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:11:45.417298 | orchestrator | changed: [testbed-node-1]
2026-04-09 01:11:45.417302 | orchestrator | changed: [testbed-node-2]
2026-04-09 01:11:45.417306 | orchestrator |
2026-04-09 01:11:45.417309 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:11:45.417313 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-09 01:11:45.417318 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 01:11:45.417322 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 01:11:45.417325 | orchestrator |
2026-04-09 01:11:45.417329 | orchestrator |
2026-04-09 01:11:45.417333 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:11:45.417337 | orchestrator | Thursday 09 April 2026 01:11:43 +0000 (0:00:10.972) 0:04:35.841 ********
2026-04-09 01:11:45.417343 | orchestrator | ===============================================================================
2026-04-09 01:11:45.417347 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.22s
2026-04-09 01:11:45.417351 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.20s
2026-04-09 01:11:45.417355 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.03s
2026-04-09 01:11:45.417359 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.64s
2026-04-09 01:11:45.417363 | orchestrator | octavia : Restart octavia-api container -------------------------------- 13.86s
2026-04-09 01:11:45.417366 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.97s
2026-04-09 01:11:45.417370 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.97s
2026-04-09 01:11:45.417374 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.79s
2026-04-09 01:11:45.417378 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.65s
2026-04-09 01:11:45.417381 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.08s
2026-04-09 01:11:45.417385 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.28s
2026-04-09 01:11:45.417389 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.23s
2026-04-09 01:11:45.417393 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.81s
2026-04-09 01:11:45.417397 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.73s
2026-04-09 01:11:45.417401 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.58s
2026-04-09 01:11:45.417404 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.32s
2026-04-09 01:11:45.417408 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.29s
2026-04-09 01:11:45.417413 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.27s
2026-04-09 01:11:45.417424 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.19s
2026-04-09 01:11:45.417428 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.13s
2026-04-09 01:11:48.453684 | orchestrator | 2026-04-09 01:11:48 | INFO  | Wait 1 second(s) until
refresh of running tasks 2026-04-09 01:11:51.500153 | orchestrator | 2026-04-09 01:11:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-09
01:12:40.174429 | orchestrator | 2026-04-09 01:12:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-09 01:12:43.215613 | orchestrator | 2026-04-09 01:12:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-09 01:12:46.258458 | orchestrator | 2026-04-09 01:12:46.433367 | orchestrator | 2026-04-09 01:12:46.437648 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Apr 9 01:12:46 UTC 2026 2026-04-09 01:12:46.439510 | orchestrator | 2026-04-09 01:12:46.780620 | orchestrator | ok: Runtime: 0:32:12.157216 2026-04-09 01:12:47.054578 | 2026-04-09 01:12:47.054724 | TASK [Bootstrap services] 2026-04-09 01:12:47.831395 | orchestrator | 2026-04-09 01:12:47.831652 | orchestrator | # BOOTSTRAP 2026-04-09 01:12:47.831681 | orchestrator | 2026-04-09 01:12:47.831695 | orchestrator | + set -e 2026-04-09 01:12:47.831706 | orchestrator | + echo 2026-04-09 01:12:47.831718 | orchestrator | + echo '# BOOTSTRAP' 2026-04-09 01:12:47.831734 | orchestrator | + echo 2026-04-09 01:12:47.831774 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-09 01:12:47.843035 | orchestrator | + set -e 2026-04-09 01:12:47.843110 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-09 01:12:52.668889 | orchestrator | 2026-04-09 01:12:52 | INFO  | It takes a moment until task 4305ec8c-f385-474c-95a8-a29a257a6314 (flavor-manager) has been started and output is visible here. 
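The repeated "Wait 1 second(s) until refresh of running tasks" and "It takes a moment until task ... has been started" messages come from a client polling a background task until its state (and output) becomes visible. A generic sketch of that poll-until-terminal loop; the state names and the helper are hypothetical illustrations, not the OSISM API:

```python
import time

def wait_for_task(fetch_state, interval=1.0, timeout=60.0, sleep=time.sleep):
    """Poll fetch_state() until it reports a terminal state.

    fetch_state is assumed to return one of "PENDING", "RUNNING",
    "SUCCESS", "FAILURE" (illustrative states, not real OSISM states).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_state()
        if state in ("SUCCESS", "FAILURE"):
            return state
        # Mirrors the console message: wait a moment, then refresh.
        sleep(interval)
    raise TimeoutError("task did not finish in time")
```

Injecting `sleep` makes the loop testable without real delays, e.g. `wait_for_task(poller, sleep=lambda s: None)`.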
2026-04-09 01:13:01.898115 | orchestrator | 2026-04-09 01:12:57 | INFO  | Flavor SCS-1L-1 created 2026-04-09 01:13:01.898290 | orchestrator | 2026-04-09 01:12:57 | INFO  | Flavor SCS-1L-1-5 created 2026-04-09 01:13:01.898309 | orchestrator | 2026-04-09 01:12:57 | INFO  | Flavor SCS-1V-2 created 2026-04-09 01:13:01.898317 | orchestrator | 2026-04-09 01:12:57 | INFO  | Flavor SCS-1V-2-5 created 2026-04-09 01:13:01.898325 | orchestrator | 2026-04-09 01:12:57 | INFO  | Flavor SCS-1V-4 created 2026-04-09 01:13:01.898331 | orchestrator | 2026-04-09 01:12:58 | INFO  | Flavor SCS-1V-4-10 created 2026-04-09 01:13:01.898339 | orchestrator | 2026-04-09 01:12:58 | INFO  | Flavor SCS-1V-8 created 2026-04-09 01:13:01.898347 | orchestrator | 2026-04-09 01:12:58 | INFO  | Flavor SCS-1V-8-20 created 2026-04-09 01:13:01.898370 | orchestrator | 2026-04-09 01:12:58 | INFO  | Flavor SCS-2V-4 created 2026-04-09 01:13:01.898378 | orchestrator | 2026-04-09 01:12:58 | INFO  | Flavor SCS-2V-4-10 created 2026-04-09 01:13:01.898385 | orchestrator | 2026-04-09 01:12:58 | INFO  | Flavor SCS-2V-8 created 2026-04-09 01:13:01.898391 | orchestrator | 2026-04-09 01:12:59 | INFO  | Flavor SCS-2V-8-20 created 2026-04-09 01:13:01.898398 | orchestrator | 2026-04-09 01:12:59 | INFO  | Flavor SCS-2V-16 created 2026-04-09 01:13:01.898404 | orchestrator | 2026-04-09 01:12:59 | INFO  | Flavor SCS-2V-16-50 created 2026-04-09 01:13:01.898411 | orchestrator | 2026-04-09 01:12:59 | INFO  | Flavor SCS-4V-8 created 2026-04-09 01:13:01.898417 | orchestrator | 2026-04-09 01:12:59 | INFO  | Flavor SCS-4V-8-20 created 2026-04-09 01:13:01.898423 | orchestrator | 2026-04-09 01:12:59 | INFO  | Flavor SCS-4V-16 created 2026-04-09 01:13:01.898430 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-4V-16-50 created 2026-04-09 01:13:01.898436 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-4V-32 created 2026-04-09 01:13:01.898442 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-4V-32-100 created 
2026-04-09 01:13:01.898448 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-8V-16 created 2026-04-09 01:13:01.898455 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-8V-16-50 created 2026-04-09 01:13:01.898462 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-8V-32 created 2026-04-09 01:13:01.898468 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-8V-32-100 created 2026-04-09 01:13:01.898474 | orchestrator | 2026-04-09 01:13:00 | INFO  | Flavor SCS-16V-32 created 2026-04-09 01:13:01.898481 | orchestrator | 2026-04-09 01:13:01 | INFO  | Flavor SCS-16V-32-100 created 2026-04-09 01:13:01.898487 | orchestrator | 2026-04-09 01:13:01 | INFO  | Flavor SCS-2V-4-20s created 2026-04-09 01:13:01.898494 | orchestrator | 2026-04-09 01:13:01 | INFO  | Flavor SCS-4V-8-50s created 2026-04-09 01:13:01.898501 | orchestrator | 2026-04-09 01:13:01 | INFO  | Flavor SCS-4V-16-100s created 2026-04-09 01:13:01.898508 | orchestrator | 2026-04-09 01:13:01 | INFO  | Flavor SCS-8V-32-100s created 2026-04-09 01:13:03.478784 | orchestrator | 2026-04-09 01:13:03 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-09 01:13:13.635601 | orchestrator | 2026-04-09 01:13:13 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-09 01:13:13.713333 | orchestrator | 2026-04-09 01:13:13 | INFO  | Task 0e401ac7-103b-49ca-996a-8838e22661fa (bootstrap-basic) was prepared for execution. 2026-04-09 01:13:13.713398 | orchestrator | 2026-04-09 01:13:13 | INFO  | It takes a moment until task 0e401ac7-103b-49ca-996a-8838e22661fa (bootstrap-basic) has been started and output is visible here. 
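The flavors created above follow the SCS naming scheme. A rough parser, assuming the convention `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, with `V` read as vCPU, `L` as a low-performance core, and a trailing `s` as local SSD; treat these letter readings as assumptions, and the helper name as ours:

```python
import re

# Matches names like SCS-1L-1, SCS-2V-4-10, SCS-2V-4-20s (assumed convention).
_FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cls>[LV])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name):
    """Decode an SCS flavor name into its resource components."""
    m = _FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cls"),   # V = vCPU, L = low-performance (assumed)
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("ssd") is not None,
    }
```

For example, `parse_scs_flavor("SCS-4V-16-50")` yields 4 vCPUs, 16 GiB RAM, and a 50 GB disk; names without a disk part (like SCS-8V-32) are diskless flavors.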
2026-04-09 01:14:01.330983 | orchestrator | 2026-04-09 01:14:01.331088 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-09 01:14:01.331099 | orchestrator | 2026-04-09 01:14:01.331104 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 01:14:01.331108 | orchestrator | Thursday 09 April 2026 01:13:17 +0000 (0:00:00.103) 0:00:00.103 ******** 2026-04-09 01:14:01.331112 | orchestrator | ok: [localhost] 2026-04-09 01:14:01.331118 | orchestrator | 2026-04-09 01:14:01.331122 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-09 01:14:01.331126 | orchestrator | Thursday 09 April 2026 01:13:19 +0000 (0:00:02.003) 0:00:02.107 ******** 2026-04-09 01:14:01.331132 | orchestrator | ok: [localhost] 2026-04-09 01:14:01.331136 | orchestrator | 2026-04-09 01:14:01.331140 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-09 01:14:01.331143 | orchestrator | Thursday 09 April 2026 01:13:28 +0000 (0:00:09.860) 0:00:11.968 ******** 2026-04-09 01:14:01.331147 | orchestrator | changed: [localhost] 2026-04-09 01:14:01.331152 | orchestrator | 2026-04-09 01:14:01.331156 | orchestrator | TASK [Create public network] *************************************************** 2026-04-09 01:14:01.331160 | orchestrator | Thursday 09 April 2026 01:13:37 +0000 (0:00:08.137) 0:00:20.105 ******** 2026-04-09 01:14:01.331164 | orchestrator | changed: [localhost] 2026-04-09 01:14:01.331168 | orchestrator | 2026-04-09 01:14:01.331175 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-09 01:14:01.331179 | orchestrator | Thursday 09 April 2026 01:13:42 +0000 (0:00:05.310) 0:00:25.416 ******** 2026-04-09 01:14:01.331183 | orchestrator | changed: [localhost] 2026-04-09 01:14:01.331187 | orchestrator | 2026-04-09 01:14:01.331191 | orchestrator 
| TASK [Create public subnet] ****************************************************
2026-04-09 01:14:01.331195 | orchestrator | Thursday 09 April 2026 01:13:48 +0000 (0:00:04.446) 0:00:31.782 ********
2026-04-09 01:14:01.331199 | orchestrator | changed: [localhost]
2026-04-09 01:14:01.331203 | orchestrator |
2026-04-09 01:14:01.331206 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-09 01:14:01.331211 | orchestrator | Thursday 09 April 2026 01:13:53 +0000 (0:00:04.331) 0:00:36.229 ********
2026-04-09 01:14:01.331217 | orchestrator | changed: [localhost]
2026-04-09 01:14:01.331226 | orchestrator |
2026-04-09 01:14:01.331235 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-09 01:14:01.331251 | orchestrator | Thursday 09 April 2026 01:13:57 +0000 (0:00:03.660) 0:00:40.560 ********
2026-04-09 01:14:01.331257 | orchestrator | ok: [localhost]
2026-04-09 01:14:01.331262 | orchestrator |
2026-04-09 01:14:01.331268 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:14:01.331274 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 01:14:01.331282 | orchestrator |
2026-04-09 01:14:01.331287 | orchestrator |
2026-04-09 01:14:01.331294 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:14:01.331300 | orchestrator | Thursday 09 April 2026 01:14:01 +0000 (0:00:44.221 total) ********
2026-04-09 01:14:01.331306 | orchestrator | ===============================================================================
2026-04-09 01:14:01.331312 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.86s
2026-04-09 01:14:01.331338 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.14s
2026-04-09 01:14:01.331343 | orchestrator | Set public network to default ------------------------------------------- 6.37s
2026-04-09 01:14:01.331347 | orchestrator | Create public network --------------------------------------------------- 5.31s
2026-04-09 01:14:01.331351 | orchestrator | Create public subnet ---------------------------------------------------- 4.45s
2026-04-09 01:14:01.331355 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.33s
2026-04-09 01:14:01.331358 | orchestrator | Create manager role ----------------------------------------------------- 3.66s
2026-04-09 01:14:01.331362 | orchestrator | Gathering Facts --------------------------------------------------------- 2.00s
2026-04-09 01:14:03.275965 | orchestrator | 2026-04-09 01:14:03 | INFO  | It takes a moment until task 8cac04f9-30d0-4179-bf61-ff1eca3b8126 (image-manager) has been started and output is visible here.
2026-04-09 01:14:44.697800 | orchestrator | 2026-04-09 01:14:06 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-09 01:14:44.697899 | orchestrator | 2026-04-09 01:14:06 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-09 01:14:44.697912 | orchestrator | 2026-04-09 01:14:06 | INFO  | Importing image Cirros 0.6.2
2026-04-09 01:14:44.697920 | orchestrator | 2026-04-09 01:14:06 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-09 01:14:44.697930 | orchestrator | 2026-04-09 01:14:08 | INFO  | Waiting for image to leave queued state...
2026-04-09 01:14:44.697938 | orchestrator | 2026-04-09 01:14:10 | INFO  | Waiting for import to complete...
2026-04-09 01:14:44.697945 | orchestrator | 2026-04-09 01:14:20 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-09 01:14:44.697954 | orchestrator | 2026-04-09 01:14:21 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-09 01:14:44.697961 | orchestrator | 2026-04-09 01:14:21 | INFO  | Setting internal_version = 0.6.2
2026-04-09 01:14:44.697969 | orchestrator | 2026-04-09 01:14:21 | INFO  | Setting image_original_user = cirros
2026-04-09 01:14:44.697977 | orchestrator | 2026-04-09 01:14:21 | INFO  | Adding tag os:cirros
2026-04-09 01:14:44.697984 | orchestrator | 2026-04-09 01:14:21 | INFO  | Setting property architecture: x86_64
2026-04-09 01:14:44.698143 | orchestrator | 2026-04-09 01:14:21 | INFO  | Setting property hw_disk_bus: scsi
2026-04-09 01:14:44.698168 | orchestrator | 2026-04-09 01:14:21 | INFO  | Setting property hw_rng_model: virtio
2026-04-09 01:14:44.698176 | orchestrator | 2026-04-09 01:14:22 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-09 01:14:44.698183 | orchestrator | 2026-04-09 01:14:22 | INFO  | Setting property hw_watchdog_action: reset
2026-04-09 01:14:44.698190 | orchestrator | 2026-04-09 01:14:22 | INFO  | Setting property hypervisor_type: qemu
2026-04-09 01:14:44.698207 | orchestrator | 2026-04-09 01:14:22 | INFO  | Setting property os_distro: cirros
2026-04-09 01:14:44.698215 | orchestrator | 2026-04-09 01:14:23 | INFO  | Setting property os_purpose: minimal
2026-04-09 01:14:44.698222 | orchestrator | 2026-04-09 01:14:23 | INFO  | Setting property replace_frequency: never
2026-04-09 01:14:44.698229 | orchestrator | 2026-04-09 01:14:23 | INFO  | Setting property uuid_validity: none
2026-04-09 01:14:44.698236 | orchestrator | 2026-04-09 01:14:23 | INFO  | Setting property provided_until: none
2026-04-09 01:14:44.698244 | orchestrator | 2026-04-09 01:14:23 | INFO  | Setting property image_description: Cirros
2026-04-09 01:14:44.698251 | orchestrator | 2026-04-09 01:14:24 | INFO  | Setting property image_name: Cirros
2026-04-09 01:14:44.698279 | orchestrator | 2026-04-09 01:14:24 | INFO  | Setting property internal_version: 0.6.2
2026-04-09 01:14:44.698287 | orchestrator | 2026-04-09 01:14:24 | INFO  | Setting property image_original_user: cirros
2026-04-09 01:14:44.698294 | orchestrator | 2026-04-09 01:14:24 | INFO  | Setting property os_version: 0.6.2
2026-04-09 01:14:44.698302 | orchestrator | 2026-04-09 01:14:24 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-09 01:14:44.698311 | orchestrator | 2026-04-09 01:14:25 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-09 01:14:44.698318 | orchestrator | 2026-04-09 01:14:25 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-09 01:14:44.698325 | orchestrator | 2026-04-09 01:14:25 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-09 01:14:44.698336 | orchestrator | 2026-04-09 01:14:25 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-09 01:14:44.698345 | orchestrator | 2026-04-09 01:14:25 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-09 01:14:44.698353 | orchestrator | 2026-04-09 01:14:25 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-09 01:14:44.698361 | orchestrator | 2026-04-09 01:14:25 | INFO  | Importing image Cirros 0.6.3
2026-04-09 01:14:44.698368 | orchestrator | 2026-04-09 01:14:25 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-09 01:14:44.698376 | orchestrator | 2026-04-09 01:14:25 | INFO  | Waiting for image to leave queued state...
2026-04-09 01:14:44.698384 | orchestrator | 2026-04-09 01:14:27 | INFO  | Waiting for import to complete...
2026-04-09 01:14:44.698409 | orchestrator | 2026-04-09 01:14:38 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-09 01:14:44.698417 | orchestrator | 2026-04-09 01:14:38 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-09 01:14:44.698425 | orchestrator | 2026-04-09 01:14:38 | INFO  | Setting internal_version = 0.6.3
2026-04-09 01:14:44.698433 | orchestrator | 2026-04-09 01:14:38 | INFO  | Setting image_original_user = cirros
2026-04-09 01:14:44.698440 | orchestrator | 2026-04-09 01:14:38 | INFO  | Adding tag os:cirros
2026-04-09 01:14:44.698448 | orchestrator | 2026-04-09 01:14:38 | INFO  | Setting property architecture: x86_64
2026-04-09 01:14:44.698456 | orchestrator | 2026-04-09 01:14:39 | INFO  | Setting property hw_disk_bus: scsi
2026-04-09 01:14:44.698464 | orchestrator | 2026-04-09 01:14:39 | INFO  | Setting property hw_rng_model: virtio
2026-04-09 01:14:44.698472 | orchestrator | 2026-04-09 01:14:40 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-09 01:14:44.698480 | orchestrator | 2026-04-09 01:14:40 | INFO  | Setting property hw_watchdog_action: reset
2026-04-09 01:14:44.698488 | orchestrator | 2026-04-09 01:14:40 | INFO  | Setting property hypervisor_type: qemu
2026-04-09 01:14:44.698495 | orchestrator | 2026-04-09 01:14:40 | INFO  | Setting property os_distro: cirros
2026-04-09 01:14:44.698503 | orchestrator | 2026-04-09 01:14:40 | INFO  | Setting property os_purpose: minimal
2026-04-09 01:14:44.698511 | orchestrator | 2026-04-09 01:14:41 | INFO  | Setting property replace_frequency: never
2026-04-09 01:14:44.698519 | orchestrator | 2026-04-09 01:14:41 | INFO  | Setting property uuid_validity: none
2026-04-09 01:14:44.698526 | orchestrator | 2026-04-09 01:14:41 | INFO  | Setting property provided_until: none
2026-04-09 01:14:44.698534 | orchestrator | 2026-04-09 01:14:41 | INFO  | Setting property image_description: Cirros
2026-04-09 01:14:44.698548 | orchestrator | 2026-04-09 01:14:42 | INFO  | Setting property image_name: Cirros
2026-04-09 01:14:44.698555 | orchestrator | 2026-04-09 01:14:42 | INFO  | Setting property internal_version: 0.6.3
2026-04-09 01:14:44.698563 | orchestrator | 2026-04-09 01:14:42 | INFO  | Setting property image_original_user: cirros
2026-04-09 01:14:44.698570 | orchestrator | 2026-04-09 01:14:43 | INFO  | Setting property os_version: 0.6.3
2026-04-09 01:14:44.698579 | orchestrator | 2026-04-09 01:14:43 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-09 01:14:44.698587 | orchestrator | 2026-04-09 01:14:43 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-09 01:14:44.698595 | orchestrator | 2026-04-09 01:14:43 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-09 01:14:44.698602 | orchestrator | 2026-04-09 01:14:43 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-09 01:14:44.698610 | orchestrator | 2026-04-09 01:14:43 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-09 01:14:44.923428 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-04-09 01:14:46.808686 | orchestrator | 2026-04-09 01:14:46 | INFO  | date: 2026-04-07
2026-04-09 01:14:46.808769 | orchestrator | 2026-04-09 01:14:46 | INFO  | image: octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-09 01:14:46.808798 | orchestrator | 2026-04-09 01:14:46 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-09 01:14:46.808805 | orchestrator | 2026-04-09 01:14:46 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2.CHECKSUM
2026-04-09 01:14:47.003125 | orchestrator | 2026-04-09 01:14:47 | INFO  | checksum: c4f8130b9b88752cd3a30f3b2f025c70b2421aeafd1894491d662bda8fc15d00
2026-04-09 01:14:47.092575 | orchestrator | 2026-04-09 01:14:47 | INFO  | It takes a moment until task de1b78a2-076f-4346-a43b-623855f2a25c (image-manager) has been started and output is visible here.
2026-04-09 01:15:48.559335 | orchestrator | 2026-04-09 01:14:49 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-07'
2026-04-09 01:15:48.559390 | orchestrator | 2026-04-09 01:14:49 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2: 200
2026-04-09 01:15:48.559398 | orchestrator | 2026-04-09 01:14:49 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-07
2026-04-09 01:15:48.559402 | orchestrator | 2026-04-09 01:14:49 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-09 01:15:48.559407 | orchestrator | 2026-04-09 01:14:51 | INFO  | Waiting for image to leave queued state...
2026-04-09 01:15:48.559411 | orchestrator | 2026-04-09 01:14:53 | INFO  | Waiting for import to complete...
2026-04-09 01:15:48.559415 | orchestrator | 2026-04-09 01:15:03 | INFO  | Waiting for import to complete...
2026-04-09 01:15:48.559419 | orchestrator | 2026-04-09 01:15:13 | INFO  | Waiting for import to complete...
2026-04-09 01:15:48.559424 | orchestrator | 2026-04-09 01:15:23 | INFO  | Waiting for import to complete...
2026-04-09 01:15:48.559429 | orchestrator | 2026-04-09 01:15:33 | INFO  | Waiting for import to complete...
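[Editor's note: the `301-openstack-octavia-amphora-image.sh` output above shows the bootstrap script fetching a `.CHECKSUM` sidecar and logging the sha256 before handing the URL to the image-manager. A hedged sketch of verifying a downloaded image against such a value (helper names `sha256_of`/`verify` are illustrative, not the script's actual code):]

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through sha256 so a multi-GB qcow2 never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, expected: str) -> bool:
    """Compare against the hex digest taken from the .CHECKSUM sidecar file."""
    return sha256_of(path) == expected.strip().lower()
```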
2026-04-09 01:15:48.559433 | orchestrator | 2026-04-09 01:15:44 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-07' successfully completed, reloading images
2026-04-09 01:15:48.559449 | orchestrator | 2026-04-09 01:15:44 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-09 01:15:48.559453 | orchestrator | 2026-04-09 01:15:44 | INFO  | Setting internal_version = 2026-04-07
2026-04-09 01:15:48.559457 | orchestrator | 2026-04-09 01:15:44 | INFO  | Setting image_original_user = ubuntu
2026-04-09 01:15:48.559461 | orchestrator | 2026-04-09 01:15:44 | INFO  | Adding tag amphora
2026-04-09 01:15:48.559466 | orchestrator | 2026-04-09 01:15:44 | INFO  | Adding tag os:ubuntu
2026-04-09 01:15:48.559469 | orchestrator | 2026-04-09 01:15:44 | INFO  | Setting property architecture: x86_64
2026-04-09 01:15:48.559473 | orchestrator | 2026-04-09 01:15:45 | INFO  | Setting property hw_disk_bus: scsi
2026-04-09 01:15:48.559477 | orchestrator | 2026-04-09 01:15:45 | INFO  | Setting property hw_rng_model: virtio
2026-04-09 01:15:48.559481 | orchestrator | 2026-04-09 01:15:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-09 01:15:48.559485 | orchestrator | 2026-04-09 01:15:45 | INFO  | Setting property hw_watchdog_action: reset
2026-04-09 01:15:48.559488 | orchestrator | 2026-04-09 01:15:45 | INFO  | Setting property hypervisor_type: qemu
2026-04-09 01:15:48.559492 | orchestrator | 2026-04-09 01:15:46 | INFO  | Setting property os_distro: ubuntu
2026-04-09 01:15:48.559496 | orchestrator | 2026-04-09 01:15:46 | INFO  | Setting property replace_frequency: quarterly
2026-04-09 01:15:48.559500 | orchestrator | 2026-04-09 01:15:46 | INFO  | Setting property uuid_validity: last-1
2026-04-09 01:15:48.559504 | orchestrator | 2026-04-09 01:15:46 | INFO  | Setting property provided_until: none
2026-04-09 01:15:48.559507 | orchestrator | 2026-04-09 01:15:46 | INFO  | Setting property os_purpose: network
2026-04-09 01:15:48.559511 | orchestrator | 2026-04-09 01:15:46 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-09 01:15:48.559522 | orchestrator | 2026-04-09 01:15:47 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-09 01:15:48.559526 | orchestrator | 2026-04-09 01:15:47 | INFO  | Setting property internal_version: 2026-04-07
2026-04-09 01:15:48.559530 | orchestrator | 2026-04-09 01:15:47 | INFO  | Setting property image_original_user: ubuntu
2026-04-09 01:15:48.559534 | orchestrator | 2026-04-09 01:15:47 | INFO  | Setting property os_version: 2026-04-07
2026-04-09 01:15:48.559538 | orchestrator | 2026-04-09 01:15:47 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260407.qcow2
2026-04-09 01:15:48.559542 | orchestrator | 2026-04-09 01:15:48 | INFO  | Setting property image_build_date: 2026-04-07
2026-04-09 01:15:48.559548 | orchestrator | 2026-04-09 01:15:48 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-09 01:15:48.559554 | orchestrator | 2026-04-09 01:15:48 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-07'
2026-04-09 01:15:48.559561 | orchestrator | 2026-04-09 01:15:48 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-09 01:15:48.559576 | orchestrator | 2026-04-09 01:15:48 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-09 01:15:48.559582 | orchestrator | 2026-04-09 01:15:48 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-09 01:15:48.559586 | orchestrator | 2026-04-09 01:15:48 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-09 01:15:49.196654 | orchestrator | ok: Runtime: 0:03:01.342236
2026-04-09 01:15:49.220752 |
2026-04-09 01:15:49.220902 | TASK [Run checks]
2026-04-09 01:15:49.919937 | orchestrator | + set -e
2026-04-09 01:15:49.920060 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 01:15:49.920070 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 01:15:49.920080 | orchestrator | ++ INTERACTIVE=false
2026-04-09 01:15:49.920085 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 01:15:49.920089 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 01:15:49.920095 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-09 01:15:49.921401 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-09 01:15:49.928554 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-09 01:15:49.928611 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-09 01:15:49.928617 | orchestrator | + echo
2026-04-09 01:15:49.928621 | orchestrator |
2026-04-09 01:15:49.928625 | orchestrator | # CHECK
2026-04-09 01:15:49.928629 | orchestrator |
2026-04-09 01:15:49.928645 | orchestrator | + echo '# CHECK'
2026-04-09 01:15:49.928649 | orchestrator | + echo
2026-04-09 01:15:49.928657 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 01:15:49.929272 | orchestrator | ++ semver latest 5.0.0
2026-04-09 01:15:49.992504 | orchestrator |
2026-04-09 01:15:49.992596 | orchestrator | ## Containers @ testbed-manager
2026-04-09 01:15:49.992608 | orchestrator |
2026-04-09 01:15:49.992619 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-09 01:15:49.992628 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-09 01:15:49.992637 | orchestrator | + echo
2026-04-09 01:15:49.992645 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-09 01:15:49.992654 | orchestrator | + echo
2026-04-09 01:15:49.992662 | orchestrator | + osism container testbed-manager ps
2026-04-09 01:15:51.031269 | orchestrator | 2026-04-09 01:15:51 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-09 01:15:51.436883 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 01:15:51.436959 | orchestrator | bd9561aa6f57 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2026-04-09 01:15:51.436981 | orchestrator | 3c429d40d0a8 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2026-04-09 01:15:51.436993 | orchestrator | 8e33f701af12 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-09 01:15:51.437000 | orchestrator | 0ad5883287af registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_node_exporter
2026-04-09 01:15:51.437009 | orchestrator | 28fb34a9d5f1 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2026-04-09 01:15:51.437017 | orchestrator | e1961e4ec517 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient
2026-04-09 01:15:51.437024 | orchestrator | 32bcf073496d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-09 01:15:51.437031 | orchestrator | b606dc662d0e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-09 01:15:51.437054 | orchestrator | e6e8203d6a2f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-09 01:15:51.437062 | orchestrator | 9d5129ddf622 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2026-04-09 01:15:51.437068 | orchestrator | ba9ad3c89bae registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient
2026-04-09 01:15:51.437075 | orchestrator | 6f0901e3d537 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer
2026-04-09 01:15:51.437081 | orchestrator | da1d57c4f206 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-09 01:15:51.437088 | orchestrator | bf6a6d3c8f90 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1
2026-04-09 01:15:51.437095 | orchestrator | f564bbadb7c9 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) kolla-ansible
2026-04-09 01:15:51.437120 | orchestrator | a08b989e6466 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-kubernetes
2026-04-09 01:15:51.437131 | orchestrator | da3d37b51eac registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-ansible
2026-04-09 01:15:51.437137 | orchestrator | 9d6b6d1ea4d2 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) ceph-ansible
2026-04-09 01:15:51.437144 | orchestrator | 24b5006a6d82 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-09 01:15:51.437151 | orchestrator | c4f6dbe7d1e9 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1
2026-04-09 01:15:51.437155 | orchestrator | fa128731c4c3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1
2026-04-09 01:15:51.437158 | orchestrator | 1f8c44ea4525 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-09 01:15:51.437162 | orchestrator | 68be116327b1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-09 01:15:51.437171 | orchestrator | a483d1905476 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1
2026-04-09 01:15:51.437175 | orchestrator | 8aead7da4792 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1
2026-04-09 01:15:51.437179 | orchestrator | 348e255fc7e6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1
2026-04-09 01:15:51.437183 | orchestrator | d04cd12dd59f registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient
2026-04-09 01:15:51.437186 | orchestrator | 6251e791a9cd registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-09 01:15:51.437193 | orchestrator | b1a9e61940b8 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-09 01:15:51.561314 | orchestrator |
2026-04-09 01:15:51.561360 | orchestrator | ## Images @ testbed-manager
2026-04-09 01:15:51.561369 | orchestrator |
2026-04-09 01:15:51.561377 | orchestrator | + echo
2026-04-09 01:15:51.561384 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-09 01:15:51.561392 | orchestrator | + echo
2026-04-09 01:15:51.561402 | orchestrator | + osism container testbed-manager images
2026-04-09 01:15:52.993514 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 01:15:52.993589 | orchestrator | registry.osism.tech/osism/osism-ansible latest 8a27fa143461 About an hour ago 638MB
2026-04-09 01:15:52.993595 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 02d18442f8ec About an hour ago 636MB
2026-04-09 01:15:52.993600 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 67004d43352e About an hour ago 1.24GB
2026-04-09 01:15:52.993604 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5f240ab7f93d About an hour ago 585MB
2026-04-09 01:15:52.993622 | orchestrator | registry.osism.tech/osism/osism latest 2600ae4320d1 About an hour ago 407MB
2026-04-09 01:15:52.993626 | orchestrator | registry.osism.tech/osism/osism-frontend latest 27838c614aea About an hour ago 212MB
2026-04-09 01:15:52.993630 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 4a7da13cbb1b About an hour ago 357MB
2026-04-09 01:15:52.993634 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 c1b3cb67b1fe 21 hours ago 404MB
2026-04-09 01:15:52.993638 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 21 hours ago 357MB
2026-04-09 01:15:52.993642 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 3d6347c81b05 21 hours ago 308MB
2026-04-09 01:15:52.993646 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 21 hours ago 306MB
2026-04-09 01:15:52.993650 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7cc0762d03ae 21 hours ago 239MB
2026-04-09 01:15:52.993662 | orchestrator | registry.osism.tech/osism/cephclient reef 46995ad16e22 21 hours ago 453MB
2026-04-09 01:15:52.993670 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 23 hours ago 265MB
2026-04-09 01:15:52.993695 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 23 hours ago 579MB
2026-04-09 01:15:52.993703 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 23 hours ago 668MB
2026-04-09 01:15:52.993709 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 6484c96cb268 23 hours ago 839MB
2026-04-09 01:15:52.993715 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-09 01:15:52.993722 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-09 01:15:52.993728 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-09 01:15:52.993735 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-09 01:15:52.993741 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-09 01:15:52.993748 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-09 01:15:52.993752 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-09 01:15:53.115155 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-09 01:15:53.115233 | orchestrator | ++ semver latest 5.0.0
2026-04-09 01:15:53.160417 | orchestrator |
2026-04-09 01:15:53.160480 | orchestrator | ## Containers @ testbed-node-0
2026-04-09 01:15:53.160487 | orchestrator |
2026-04-09 01:15:53.160491 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-09 01:15:53.160495 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-09 01:15:53.160499 | orchestrator | + echo
2026-04-09 01:15:53.160504 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-09 01:15:53.160508 | orchestrator | + echo
2026-04-09 01:15:53.160513 | orchestrator | + osism container testbed-node-0 ps
2026-04-09 01:15:54.554253 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-09 01:15:54.554315 | orchestrator | 3ccb288faeb4 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-09 01:15:54.554327 | orchestrator | d60307501c13 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-09 01:15:54.554332 | orchestrator | 24d2a48cee3e registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-09 01:15:54.554336 | orchestrator | b6cff561c512 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-09 01:15:54.554340 | orchestrator | 6f6a409f1e09 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-09 01:15:54.554344 | orchestrator | 309ec1537d14 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-09 01:15:54.554348 | orchestrator | c434e6ea52dc registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-09 01:15:54.554361 | orchestrator | 97c5bb6ad31f registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-09 01:15:54.554365 | orchestrator | 60267820feeb registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) placement_api
2026-04-09 01:15:54.554379 | orchestrator | ac9d404ad2d2 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-09 01:15:54.554383 | orchestrator | aa4d534d8292 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-09 01:15:54.554387 | orchestrator | 1d178e7038cb registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-09 01:15:54.554390 | orchestrator | 3365d5a3ef74 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-09 01:15:54.554394 | orchestrator | 6ca544932e52 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-09 01:15:54.554398 | orchestrator | 4169e9c8d727 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-09 01:15:54.554402 | orchestrator | c887ed60eb49 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-09 01:15:54.554406 | orchestrator | f0fbc80db2ed registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-09 01:15:54.554410 | orchestrator | 281560ce31e8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-09 01:15:54.554414 | orchestrator | a9ad4c8460a8 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-09 01:15:54.554418 | orchestrator | 1747122522f2 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-09 01:15:54.554421 | orchestrator | 1e4a5cfff542 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-09 01:15:54.554435 | orchestrator | 178a097379a5 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-04-09 01:15:54.554439 | orchestrator | dfb234246622 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-09 01:15:54.554445 | orchestrator | 75f2e9740df7 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-04-09 01:15:54.554452 | orchestrator | 4d5ba02a9eec registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-04-09 01:15:54.554461 | orchestrator | 4b1c7a62a821 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-09 01:15:54.554467 | orchestrator | 8592869d6aeb registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-09 01:15:54.554474 | orchestrator | 5a880d0b7e20 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-09 01:15:54.554489 | orchestrator | e0d4fca03d3e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-09 01:15:54.554499 | orchestrator | 43be1bd069cc registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2026-04-09 01:15:54.554504 | orchestrator | 403146e60b03 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2026-04-09 01:15:54.554510 | orchestrator | 0cb54258742b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-04-09 01:15:54.554516 | orchestrator | 116ae78bfcac registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-09 01:15:54.554522 | orchestrator | e5a9e9aa77f4 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-04-09 01:15:54.554528 | orchestrator | d8977ac1ce52 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-04-09 01:15:54.554534 | orchestrator | d0180f2e134b registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-04-09 01:15:54.554541 | orchestrator | 7fb7ee0a517f registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-04-09 01:15:54.554547 | orchestrator | 8852ece7d2e9 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-04-09 01:15:54.554552 | orchestrator | 899b5e5edf31 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-04-09 01:15:54.554558 | orchestrator | 5dc8473e0923 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-04-09 01:15:54.554565 | orchestrator | 1162dd248a62 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-04-09 01:15:54.554572 | orchestrator | 1659f6e8f8a3 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-04-09 01:15:54.554578 | orchestrator | 63ebea0a375f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-04-09 01:15:54.554584 | orchestrator | 064a60a88c6e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-04-09 01:15:54.554596 | orchestrator | ed6a7b485c5b registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-04-09 01:15:54.554602 | orchestrator | 95a5c70eec13 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2026-04-09 01:15:54.554608 | orchestrator | a02bb341a056 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-04-09 01:15:54.554613 | orchestrator | d7072f158443 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0
2026-04-09 01:15:54.554623 | orchestrator | 0b31a33b3b9d registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-04-09 01:15:54.554635 | orchestrator | e04c61b7dbf8 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-04-09 01:15:54.554640 | orchestrator | 003b78b342f0 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-04-09 01:15:54.554647 | orchestrator | 318d2cb87deb registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-04-09 01:15:54.554653 | orchestrator | 79901ddd9fb7 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-04-09 01:15:54.554660 | orchestrator | e32de0a198ee registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-04-09 01:15:54.554670 | orchestrator | 90575178874f registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-04-09 01:15:54.554677 | orchestrator | 3bbad9879160 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-09 01:15:54.554683 | orchestrator | 72a403e441a7 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-09 01:15:54.554689 | orchestrator | 2431330d4690 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-09 01:15:54.554693 | orchestrator | 9dfba35567ce registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-04-09 01:15:54.687290 | orchestrator |
2026-04-09 01:15:54.687353 | orchestrator | ## Images @ testbed-node-0
2026-04-09 01:15:54.687362 | orchestrator |
2026-04-09 01:15:54.687369 | orchestrator | + echo
2026-04-09 01:15:54.687376 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-09 01:15:54.687383 | orchestrator | + echo
2026-04-09 01:15:54.687389 | orchestrator | + osism container testbed-node-0 images
2026-04-09 01:15:56.128471 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-09 01:15:56.128528 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4c5bda7121dd 21 hours ago 266MB
2026-04-09 01:15:56.128534 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1b423135131d 21 hours ago 273MB
2026-04-09 01:15:56.128538 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 20e411de4aa7 21 hours ago 273MB
2026-04-09 01:15:56.128542 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ed0a26f28f7c 21 hours ago 452MB
2026-04-09 01:15:56.128546 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 799699931a41 21 hours ago 298MB
2026-04-09 01:15:56.128550 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 21 hours ago 357MB
2026-04-09 01:15:56.128555 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8b7b44f2563a 21 hours ago 292MB
2026-04-09 01:15:56.128559 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 58219dd9eee5 21 hours ago 301MB
2026-04-09 01:15:56.128563 | orchestrator
| registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 21 hours ago 306MB 2026-04-09 01:15:56.128567 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 304562932cfa 21 hours ago 279MB 2026-04-09 01:15:56.128581 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e3352e08634e 21 hours ago 279MB 2026-04-09 01:15:56.128585 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b3cfae2d4a21 21 hours ago 975MB 2026-04-09 01:15:56.128589 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 aefbc46ee397 21 hours ago 1.4GB 2026-04-09 01:15:56.128593 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 61b46b13fe15 21 hours ago 1.41GB 2026-04-09 01:15:56.128597 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6cafd41453ca 21 hours ago 1.41GB 2026-04-09 01:15:56.128601 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 31ecb4717921 21 hours ago 1.72GB 2026-04-09 01:15:56.128613 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 0e8d7891d417 21 hours ago 990MB 2026-04-09 01:15:56.128617 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d04f4045e6e0 21 hours ago 991MB 2026-04-09 01:15:56.128621 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e3fd7619dfad 21 hours ago 991MB 2026-04-09 01:15:56.128625 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e66da8e4e8e4 21 hours ago 1.16GB 2026-04-09 01:15:56.128629 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 173d7508c3d0 21 hours ago 1.04GB 2026-04-09 01:15:56.128633 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e625c11d2aba 21 hours ago 1.04GB 2026-04-09 01:15:56.128637 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5e4193e479dd 21 hours ago 1.07GB 2026-04-09 01:15:56.128641 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 133764135858 21 hours ago 1.13GB 2026-04-09 01:15:56.128644 | 
orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 dd2b3fb7f1cd 21 hours ago 1.24GB 2026-04-09 01:15:56.128648 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 a5ae6c2a915f 21 hours ago 976MB 2026-04-09 01:15:56.128652 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 d0944801676c 21 hours ago 975MB 2026-04-09 01:15:56.128663 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 5b4036922655 21 hours ago 1.03GB 2026-04-09 01:15:56.128671 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 dbdb26832643 21 hours ago 1.05GB 2026-04-09 01:15:56.128675 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c4d30b7728c1 21 hours ago 1.03GB 2026-04-09 01:15:56.128679 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 47c9d89c9659 21 hours ago 1.05GB 2026-04-09 01:15:56.128682 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 7ecfa7c2d4c0 21 hours ago 1.03GB 2026-04-09 01:15:56.128686 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf471ac8c087 21 hours ago 1.1GB 2026-04-09 01:15:56.128690 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 6c2771325ef1 21 hours ago 989MB 2026-04-09 01:15:56.128694 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 1ca54700db6e 21 hours ago 983MB 2026-04-09 01:15:56.128697 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f12ce9cf8572 21 hours ago 984MB 2026-04-09 01:15:56.128714 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 71d34dfb5386 21 hours ago 984MB 2026-04-09 01:15:56.128718 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 49c5d2e5a9c9 21 hours ago 989MB 2026-04-09 01:15:56.128722 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 e0b3465740e7 21 hours ago 984MB 2026-04-09 01:15:56.128726 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 5eb0eb38814b 21 hours 
ago 990MB 2026-04-09 01:15:56.128732 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ad40fa21c96b 21 hours ago 1.05GB 2026-04-09 01:15:56.128736 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 fdb68ba12480 21 hours ago 974MB 2026-04-09 01:15:56.128740 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 41aed91cb434 21 hours ago 974MB 2026-04-09 01:15:56.128744 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 06b1e3f48771 21 hours ago 974MB 2026-04-09 01:15:56.128748 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 45e213225baf 21 hours ago 973MB 2026-04-09 01:15:56.128754 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 21078444d17b 21 hours ago 1.21GB 2026-04-09 01:15:56.128758 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 43d66ea212d8 21 hours ago 1.37GB 2026-04-09 01:15:56.128762 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7fd0568028b5 21 hours ago 1.21GB 2026-04-09 01:15:56.128765 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9c2a462a150e 21 hours ago 1.21GB 2026-04-09 01:15:56.128769 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 186b5bd87853 21 hours ago 840MB 2026-04-09 01:15:56.128773 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 000506bf22df 21 hours ago 840MB 2026-04-09 01:15:56.128777 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 231bc2a8da3e 21 hours ago 840MB 2026-04-09 01:15:56.128780 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 dacd1a06688c 21 hours ago 840MB 2026-04-09 01:15:56.128784 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 23 hours ago 1.56GB 2026-04-09 01:15:56.128788 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 15bb65e2b02e 23 hours ago 1.53GB 2026-04-09 01:15:56.128792 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ddd2e742b66d 23 hours ago 276MB 2026-04-09 
01:15:56.128796 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 23 hours ago 265MB 2026-04-09 01:15:56.128799 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 289da7c7eeb7 23 hours ago 322MB 2026-04-09 01:15:56.128803 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 987bccb7e29c 23 hours ago 1.03GB 2026-04-09 01:15:56.128807 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c28009080316 23 hours ago 274MB 2026-04-09 01:15:56.128812 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d9085bb7b182 23 hours ago 411MB 2026-04-09 01:15:56.128818 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 23 hours ago 579MB 2026-04-09 01:15:56.128825 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 23 hours ago 668MB 2026-04-09 01:15:56.128830 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0f4a765fdbd2 23 hours ago 1.15GB 2026-04-09 01:15:56.128835 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 45 hours ago 1.35GB 2026-04-09 01:15:56.258278 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-09 01:15:56.259105 | orchestrator | ++ semver latest 5.0.0 2026-04-09 01:15:56.298570 | orchestrator | 2026-04-09 01:15:56.298629 | orchestrator | ## Containers @ testbed-node-1 2026-04-09 01:15:56.298637 | orchestrator | 2026-04-09 01:15:56.298643 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-09 01:15:56.298648 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 01:15:56.298654 | orchestrator | + echo 2026-04-09 01:15:56.298659 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-09 01:15:56.298665 | orchestrator | + echo 2026-04-09 01:15:56.298670 | orchestrator | + osism container testbed-node-1 ps 2026-04-09 01:15:57.739057 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-09 01:15:57.739120 | orchestrator | c8884805ea16 
registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-09 01:15:57.739129 | orchestrator | 5f7c089997bc registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-09 01:15:57.739137 | orchestrator | 9d72c86232fa registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-09 01:15:57.739144 | orchestrator | 7be6a0d7d64b registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-09 01:15:57.739151 | orchestrator | d3506bfd9a62 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-09 01:15:57.739170 | orchestrator | c10395e79d60 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-09 01:15:57.739178 | orchestrator | b5f981cbd2e5 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-09 01:15:57.739185 | orchestrator | ae316a0221ad registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-09 01:15:57.739194 | orchestrator | 77427d260c25 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-09 01:15:57.739201 | orchestrator | 72a402819e7b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-09 01:15:57.739208 | orchestrator | 6d91ce777206 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-09 01:15:57.739215 | orchestrator | 55424c808068 
registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2026-04-09 01:15:57.739235 | orchestrator | 07745a75bb62 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-09 01:15:57.739242 | orchestrator | 8d0ad39040f4 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-09 01:15:57.739249 | orchestrator | 2c91a75771b3 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-09 01:15:57.739256 | orchestrator | f921dae94914 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-09 01:15:57.739262 | orchestrator | bb2298f68606 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-09 01:15:57.739269 | orchestrator | 28665fcf76a1 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-09 01:15:57.739275 | orchestrator | 2c74fa0ea6d5 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-09 01:15:57.740037 | orchestrator | 457d95196c5e registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-09 01:15:57.740063 | orchestrator | 6b3022d11ef1 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-09 01:15:57.740070 | orchestrator | ff5366273ef8 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 
2026-04-09 01:15:57.740077 | orchestrator | 81322f00931c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-09 01:15:57.740084 | orchestrator | f32d934fa243 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-04-09 01:15:57.740091 | orchestrator | 22bba03bb1fc registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-09 01:15:57.740097 | orchestrator | 660fa557118b registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-09 01:15:57.740104 | orchestrator | 9fc5d861e406 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-09 01:15:57.740118 | orchestrator | 0ed4bcd7d438 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-09 01:15:57.740125 | orchestrator | 25770132b9c9 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-09 01:15:57.740132 | orchestrator | c0787b0a6690 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-04-09 01:15:57.740139 | orchestrator | a770de250dd2 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-04-09 01:15:57.740145 | orchestrator | 5c583eae4f29 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-09 01:15:57.740152 | orchestrator | dde450cc192c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 
14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-09 01:15:57.740158 | orchestrator | e45c9f865629 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-04-09 01:15:57.740165 | orchestrator | 1c6b2e14a873 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-04-09 01:15:57.740171 | orchestrator | 49bd3467fd9f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-09 01:15:57.740178 | orchestrator | f8c851377892 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-09 01:15:57.740184 | orchestrator | 8332689a5e60 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-09 01:15:57.740191 | orchestrator | bc6ce047b842 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-09 01:15:57.740206 | orchestrator | 80a86f119c60 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-09 01:15:57.740213 | orchestrator | 19970dda5da5 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-09 01:15:57.740219 | orchestrator | 9859ca041ae1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-04-09 01:15:57.740272 | orchestrator | 5cf533e844c3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-09 01:15:57.740279 | orchestrator | dcfa9a4199dc registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-09 
01:15:57.740286 | orchestrator | 015cc6c46628 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-09 01:15:57.740292 | orchestrator | 05cc07c4f934 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2026-04-09 01:15:57.740299 | orchestrator | b0a612a657da registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db 2026-04-09 01:15:57.740305 | orchestrator | 8268d142fa68 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-04-09 01:15:57.740312 | orchestrator | c511da138fc9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-04-09 01:15:57.740319 | orchestrator | c5cb6068e29b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-09 01:15:57.740325 | orchestrator | ea8e86285fd2 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-04-09 01:15:57.740332 | orchestrator | 0ce254d5ae14 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-04-09 01:15:57.740342 | orchestrator | 14fb3c9f6c7d registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-09 01:15:57.740349 | orchestrator | 29b539727f2e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-09 01:15:57.740355 | orchestrator | 35d404f3548d registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-09 01:15:57.740362 | orchestrator | d579e5d74f55 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-09 01:15:57.740369 | orchestrator | 36fcbe88f48f registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-09 01:15:57.740375 | orchestrator | 76261167a2fb registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-09 01:15:57.740388 | orchestrator | c6ecfbbefbda registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-09 01:15:57.888346 | orchestrator | 2026-04-09 01:15:57.888405 | orchestrator | ## Images @ testbed-node-1 2026-04-09 01:15:57.888414 | orchestrator | 2026-04-09 01:15:57.888421 | orchestrator | + echo 2026-04-09 01:15:57.888428 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-09 01:15:57.888444 | orchestrator | + echo 2026-04-09 01:15:57.888452 | orchestrator | + osism container testbed-node-1 images 2026-04-09 01:15:59.338818 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-09 01:15:59.338876 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4c5bda7121dd 21 hours ago 266MB 2026-04-09 01:15:59.338895 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1b423135131d 21 hours ago 273MB 2026-04-09 01:15:59.338901 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 20e411de4aa7 21 hours ago 273MB 2026-04-09 01:15:59.338912 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ed0a26f28f7c 21 hours ago 452MB 2026-04-09 01:15:59.338919 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 799699931a41 21 hours ago 298MB 2026-04-09 01:15:59.338925 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 21 hours ago 357MB 2026-04-09 01:15:59.338932 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8b7b44f2563a 21 hours ago 292MB 
2026-04-09 01:15:59.338937 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 58219dd9eee5 21 hours ago 301MB 2026-04-09 01:15:59.338944 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 21 hours ago 306MB 2026-04-09 01:15:59.338950 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 304562932cfa 21 hours ago 279MB 2026-04-09 01:15:59.338956 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e3352e08634e 21 hours ago 279MB 2026-04-09 01:15:59.338962 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b3cfae2d4a21 21 hours ago 975MB 2026-04-09 01:15:59.338968 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 aefbc46ee397 21 hours ago 1.4GB 2026-04-09 01:15:59.338981 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 61b46b13fe15 21 hours ago 1.41GB 2026-04-09 01:15:59.338988 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6cafd41453ca 21 hours ago 1.41GB 2026-04-09 01:15:59.338993 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 31ecb4717921 21 hours ago 1.72GB 2026-04-09 01:15:59.338996 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 0e8d7891d417 21 hours ago 990MB 2026-04-09 01:15:59.339000 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d04f4045e6e0 21 hours ago 991MB 2026-04-09 01:15:59.339004 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e3fd7619dfad 21 hours ago 991MB 2026-04-09 01:15:59.339008 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e66da8e4e8e4 21 hours ago 1.16GB 2026-04-09 01:15:59.339012 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 173d7508c3d0 21 hours ago 1.04GB 2026-04-09 01:15:59.339015 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e625c11d2aba 21 hours ago 1.04GB 2026-04-09 01:15:59.339019 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 
5e4193e479dd 21 hours ago 1.07GB 2026-04-09 01:15:59.339023 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 133764135858 21 hours ago 1.13GB 2026-04-09 01:15:59.339026 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 dd2b3fb7f1cd 21 hours ago 1.24GB 2026-04-09 01:15:59.339042 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 5b4036922655 21 hours ago 1.03GB 2026-04-09 01:15:59.339046 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 dbdb26832643 21 hours ago 1.05GB 2026-04-09 01:15:59.339049 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c4d30b7728c1 21 hours ago 1.03GB 2026-04-09 01:15:59.339053 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 47c9d89c9659 21 hours ago 1.05GB 2026-04-09 01:15:59.339057 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 7ecfa7c2d4c0 21 hours ago 1.03GB 2026-04-09 01:15:59.339061 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf471ac8c087 21 hours ago 1.1GB 2026-04-09 01:15:59.339065 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 6c2771325ef1 21 hours ago 989MB 2026-04-09 01:15:59.339068 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 1ca54700db6e 21 hours ago 983MB 2026-04-09 01:15:59.339080 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f12ce9cf8572 21 hours ago 984MB 2026-04-09 01:15:59.339084 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 71d34dfb5386 21 hours ago 984MB 2026-04-09 01:15:59.339087 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 49c5d2e5a9c9 21 hours ago 989MB 2026-04-09 01:15:59.339100 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 e0b3465740e7 21 hours ago 984MB 2026-04-09 01:15:59.339104 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 21078444d17b 21 hours ago 1.21GB 2026-04-09 01:15:59.339108 | orchestrator | 
registry.osism.tech/kolla/nova-novncproxy 2024.2 43d66ea212d8 21 hours ago 1.37GB 2026-04-09 01:15:59.339112 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7fd0568028b5 21 hours ago 1.21GB 2026-04-09 01:15:59.339115 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9c2a462a150e 21 hours ago 1.21GB 2026-04-09 01:15:59.339119 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 186b5bd87853 21 hours ago 840MB 2026-04-09 01:15:59.339123 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 000506bf22df 21 hours ago 840MB 2026-04-09 01:15:59.339127 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 dacd1a06688c 21 hours ago 840MB 2026-04-09 01:15:59.339130 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 231bc2a8da3e 21 hours ago 840MB 2026-04-09 01:15:59.339134 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 23 hours ago 1.56GB 2026-04-09 01:15:59.339138 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 15bb65e2b02e 23 hours ago 1.53GB 2026-04-09 01:15:59.339141 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ddd2e742b66d 23 hours ago 276MB 2026-04-09 01:15:59.339145 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 23 hours ago 265MB 2026-04-09 01:15:59.339149 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 987bccb7e29c 23 hours ago 1.03GB 2026-04-09 01:15:59.339153 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 289da7c7eeb7 23 hours ago 322MB 2026-04-09 01:15:59.339156 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c28009080316 23 hours ago 274MB 2026-04-09 01:15:59.339160 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d9085bb7b182 23 hours ago 411MB 2026-04-09 01:15:59.339164 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 23 hours ago 579MB 2026-04-09 01:15:59.339168 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 
e0a7aa0c103d 23 hours ago 668MB 2026-04-09 01:15:59.339175 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0f4a765fdbd2 23 hours ago 1.15GB 2026-04-09 01:15:59.339179 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 45 hours ago 1.35GB 2026-04-09 01:15:59.467582 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-09 01:15:59.467687 | orchestrator | ++ semver latest 5.0.0 2026-04-09 01:15:59.504617 | orchestrator | 2026-04-09 01:15:59.504679 | orchestrator | ## Containers @ testbed-node-2 2026-04-09 01:15:59.504689 | orchestrator | 2026-04-09 01:15:59.504695 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-09 01:15:59.504701 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 01:15:59.504708 | orchestrator | + echo 2026-04-09 01:15:59.504714 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-09 01:15:59.504721 | orchestrator | + echo 2026-04-09 01:15:59.504728 | orchestrator | + osism container testbed-node-2 ps 2026-04-09 01:16:00.889955 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-09 01:16:00.890051 | orchestrator | a1084f1ff586 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-09 01:16:00.890065 | orchestrator | f4b99ac84b16 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-09 01:16:00.890073 | orchestrator | 2874a7ed5c0b registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-09 01:16:00.890081 | orchestrator | 225a6fcde11b registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-09 01:16:00.890090 | orchestrator | 36453242f005 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 
minutes ago Up 4 minutes (healthy) octavia_api 2026-04-09 01:16:00.890098 | orchestrator | df7a57e2fdb5 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 6 minutes grafana 2026-04-09 01:16:00.890106 | orchestrator | f77a7aac3dcf registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-09 01:16:00.890114 | orchestrator | 6b80f122837a registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-09 01:16:00.890121 | orchestrator | e5f866c45d82 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-09 01:16:00.890128 | orchestrator | 549d885c6beb registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-09 01:16:00.890136 | orchestrator | aab5f0baa78f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-09 01:16:00.890143 | orchestrator | ce6f4116cff6 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) neutron_server 2026-04-09 01:16:00.890151 | orchestrator | 587408509f3e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-09 01:16:00.890159 | orchestrator | e3716ff0f101 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-09 01:16:00.890178 | orchestrator | a6612d2b2368 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-09 01:16:00.890204 | orchestrator | 395d12a33917 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes 
(healthy) designate_central 2026-04-09 01:16:00.890212 | orchestrator | a0d88d3ddf96 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-09 01:16:00.890220 | orchestrator | 9d941e49d089 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-09 01:16:00.890229 | orchestrator | b9f6f47740da registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-09 01:16:00.890267 | orchestrator | ab2ba688d893 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-09 01:16:00.890275 | orchestrator | a58a07148a35 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-09 01:16:00.890296 | orchestrator | 15426307941c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-09 01:16:00.890305 | orchestrator | 5f5ec8e5d0f1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-09 01:16:00.890313 | orchestrator | c54e73328b00 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-04-09 01:16:00.890320 | orchestrator | 53e6a3e6b057 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-09 01:16:00.890328 | orchestrator | 83997cabebe8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-09 01:16:00.890335 | orchestrator | 43c009f0dca5 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init 
--single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-09 01:16:00.890343 | orchestrator | b25118cc47bb registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-09 01:16:00.890351 | orchestrator | f2349b720129 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-09 01:16:00.890358 | orchestrator | 4e494d1f95e3 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-04-09 01:16:00.890365 | orchestrator | 831f9e420106 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-04-09 01:16:00.890373 | orchestrator | 7c7e13b97ee9 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-09 01:16:00.890381 | orchestrator | 306b154d5e72 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-09 01:16:00.890389 | orchestrator | 83a33a6f8cf2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-04-09 01:16:00.890404 | orchestrator | 5f04fecde456 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-04-09 01:16:00.890412 | orchestrator | e25df7152b05 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-09 01:16:00.890419 | orchestrator | 459f9f29fe7a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-09 01:16:00.890426 | orchestrator | 43dc52fe8c1b 
registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-09 01:16:00.890434 | orchestrator | 3a7e39680a34 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-09 01:16:00.890441 | orchestrator | b88508f450e3 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-09 01:16:00.890449 | orchestrator | dc3a950cfd63 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-09 01:16:00.890457 | orchestrator | 234cffd29485 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-04-09 01:16:00.890465 | orchestrator | ec29e2b842a5 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-09 01:16:00.890473 | orchestrator | 2fb4245c6749 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-09 01:16:00.890488 | orchestrator | 71c2da872e23 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-09 01:16:00.890498 | orchestrator | 1c649a966115 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2026-04-09 01:16:00.890506 | orchestrator | 1651dc518499 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-04-09 01:16:00.890513 | orchestrator | 36d7517b2784 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2 2026-04-09 01:16:00.890521 | orchestrator | d62d4bae493a registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes 
(healthy) rabbitmq 2026-04-09 01:16:00.890534 | orchestrator | f13a26859815 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-04-09 01:16:00.890542 | orchestrator | ddf53fb289dc registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-04-09 01:16:00.890550 | orchestrator | be9b8a5d940e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-04-09 01:16:00.890557 | orchestrator | 567dd7de18cb registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-09 01:16:00.890565 | orchestrator | 96f150f39887 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-09 01:16:00.890580 | orchestrator | 3e1cba450f1d registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-09 01:16:00.890588 | orchestrator | 9d4a899460f2 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-09 01:16:00.890596 | orchestrator | 3f004532ce4f registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-09 01:16:00.890604 | orchestrator | 60e220a4be41 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-09 01:16:00.890612 | orchestrator | a9914cc55c9b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-09 01:16:01.025434 | orchestrator | 2026-04-09 01:16:01.025485 | orchestrator | ## Images @ testbed-node-2 2026-04-09 01:16:01.025492 | orchestrator | 2026-04-09 01:16:01.025497 | orchestrator | + echo 2026-04-09 01:16:01.025501 | orchestrator | + 
echo '## Images @ testbed-node-2' 2026-04-09 01:16:01.025506 | orchestrator | + echo 2026-04-09 01:16:01.025509 | orchestrator | + osism container testbed-node-2 images 2026-04-09 01:16:02.417025 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-09 01:16:02.417079 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4c5bda7121dd 21 hours ago 266MB 2026-04-09 01:16:02.417085 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1b423135131d 21 hours ago 273MB 2026-04-09 01:16:02.417098 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 20e411de4aa7 21 hours ago 273MB 2026-04-09 01:16:02.417102 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ed0a26f28f7c 21 hours ago 452MB 2026-04-09 01:16:02.417106 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 799699931a41 21 hours ago 298MB 2026-04-09 01:16:02.417110 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 21 hours ago 357MB 2026-04-09 01:16:02.417114 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8b7b44f2563a 21 hours ago 292MB 2026-04-09 01:16:02.417118 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 58219dd9eee5 21 hours ago 301MB 2026-04-09 01:16:02.417511 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 21 hours ago 306MB 2026-04-09 01:16:02.417536 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 304562932cfa 21 hours ago 279MB 2026-04-09 01:16:02.417540 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e3352e08634e 21 hours ago 279MB 2026-04-09 01:16:02.417544 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b3cfae2d4a21 21 hours ago 975MB 2026-04-09 01:16:02.417548 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 aefbc46ee397 21 hours ago 1.4GB 2026-04-09 01:16:02.417552 | orchestrator | 
registry.osism.tech/kolla/cinder-scheduler 2024.2 61b46b13fe15 21 hours ago 1.41GB 2026-04-09 01:16:02.417556 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6cafd41453ca 21 hours ago 1.41GB 2026-04-09 01:16:02.417559 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 31ecb4717921 21 hours ago 1.72GB 2026-04-09 01:16:02.417563 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 0e8d7891d417 21 hours ago 990MB 2026-04-09 01:16:02.417567 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d04f4045e6e0 21 hours ago 991MB 2026-04-09 01:16:02.417571 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e3fd7619dfad 21 hours ago 991MB 2026-04-09 01:16:02.417585 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e66da8e4e8e4 21 hours ago 1.16GB 2026-04-09 01:16:02.417589 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 173d7508c3d0 21 hours ago 1.04GB 2026-04-09 01:16:02.417593 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e625c11d2aba 21 hours ago 1.04GB 2026-04-09 01:16:02.417596 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5e4193e479dd 21 hours ago 1.07GB 2026-04-09 01:16:02.417600 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 133764135858 21 hours ago 1.13GB 2026-04-09 01:16:02.417604 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 dd2b3fb7f1cd 21 hours ago 1.24GB 2026-04-09 01:16:02.417608 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 5b4036922655 21 hours ago 1.03GB 2026-04-09 01:16:02.417611 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 dbdb26832643 21 hours ago 1.05GB 2026-04-09 01:16:02.417615 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c4d30b7728c1 21 hours ago 1.03GB 2026-04-09 01:16:02.417619 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 47c9d89c9659 21 hours ago 1.05GB 2026-04-09 01:16:02.417647 | 
orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 7ecfa7c2d4c0 21 hours ago 1.03GB 2026-04-09 01:16:02.417653 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf471ac8c087 21 hours ago 1.1GB 2026-04-09 01:16:02.417657 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 6c2771325ef1 21 hours ago 989MB 2026-04-09 01:16:02.417660 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 1ca54700db6e 21 hours ago 983MB 2026-04-09 01:16:02.417664 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f12ce9cf8572 21 hours ago 984MB 2026-04-09 01:16:02.417668 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 71d34dfb5386 21 hours ago 984MB 2026-04-09 01:16:02.417672 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 49c5d2e5a9c9 21 hours ago 989MB 2026-04-09 01:16:02.417676 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 e0b3465740e7 21 hours ago 984MB 2026-04-09 01:16:02.417680 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 21078444d17b 21 hours ago 1.21GB 2026-04-09 01:16:02.417683 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 43d66ea212d8 21 hours ago 1.37GB 2026-04-09 01:16:02.417687 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7fd0568028b5 21 hours ago 1.21GB 2026-04-09 01:16:02.417691 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9c2a462a150e 21 hours ago 1.21GB 2026-04-09 01:16:02.417695 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 186b5bd87853 21 hours ago 840MB 2026-04-09 01:16:02.417699 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 000506bf22df 21 hours ago 840MB 2026-04-09 01:16:02.417703 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 dacd1a06688c 21 hours ago 840MB 2026-04-09 01:16:02.417707 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 231bc2a8da3e 21 hours ago 840MB 2026-04-09 
01:16:02.417710 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 23 hours ago 1.56GB 2026-04-09 01:16:02.417714 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 15bb65e2b02e 23 hours ago 1.53GB 2026-04-09 01:16:02.418008 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ddd2e742b66d 23 hours ago 276MB 2026-04-09 01:16:02.418057 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 23 hours ago 265MB 2026-04-09 01:16:02.418071 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 289da7c7eeb7 23 hours ago 322MB 2026-04-09 01:16:02.418075 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 987bccb7e29c 23 hours ago 1.03GB 2026-04-09 01:16:02.418079 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c28009080316 23 hours ago 274MB 2026-04-09 01:16:02.418083 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d9085bb7b182 23 hours ago 411MB 2026-04-09 01:16:02.418086 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 23 hours ago 579MB 2026-04-09 01:16:02.418090 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 23 hours ago 668MB 2026-04-09 01:16:02.418094 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0f4a765fdbd2 23 hours ago 1.15GB 2026-04-09 01:16:02.418098 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 01985efead8e 45 hours ago 1.35GB 2026-04-09 01:16:02.555414 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-09 01:16:02.564579 | orchestrator | + set -e 2026-04-09 01:16:02.564626 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:16:02.565542 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 01:16:02.565570 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:16:02.565587 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:16:02.565592 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:16:02.565597 | orchestrator | ++ export 
CONFIGURATION_VERSION=main 2026-04-09 01:16:02.565603 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:16:02.565607 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:16:02.565612 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:16:02.565616 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 01:16:02.565620 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 01:16:02.565624 | orchestrator | ++ export ARA=false 2026-04-09 01:16:02.565628 | orchestrator | ++ ARA=false 2026-04-09 01:16:02.565632 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:16:02.565635 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:16:02.565639 | orchestrator | ++ export TEMPEST=true 2026-04-09 01:16:02.565643 | orchestrator | ++ TEMPEST=true 2026-04-09 01:16:02.565697 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:16:02.565708 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:16:02.565713 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 01:16:02.565720 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 01:16:02.565725 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:16:02.565731 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:16:02.565737 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:16:02.565742 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:16:02.565749 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:16:02.565755 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:16:02.565761 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 01:16:02.565767 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:16:02.565774 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 01:16:02.565780 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-09 01:16:02.576613 | orchestrator | + set -e 2026-04-09 01:16:02.576658 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-09 01:16:02.576664 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 01:16:02.576669 | orchestrator | ++ INTERACTIVE=false 2026-04-09 01:16:02.576673 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 01:16:02.576677 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 01:16:02.576681 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 01:16:02.577952 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 01:16:02.584001 | orchestrator | 2026-04-09 01:16:02.584047 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:16:02.584053 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:16:02.584057 | orchestrator | + echo 2026-04-09 01:16:02.584062 | orchestrator | # Ceph status 2026-04-09 01:16:02.584066 | orchestrator | 2026-04-09 01:16:02.584070 | orchestrator | + echo '# Ceph status' 2026-04-09 01:16:02.584074 | orchestrator | + echo 2026-04-09 01:16:02.584078 | orchestrator | + ceph -s 2026-04-09 01:16:03.159769 | orchestrator | cluster: 2026-04-09 01:16:03.159841 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-09 01:16:03.159852 | orchestrator | health: HEALTH_OK 2026-04-09 01:16:03.159859 | orchestrator | 2026-04-09 01:16:03.159867 | orchestrator | services: 2026-04-09 01:16:03.159874 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m) 2026-04-09 01:16:03.159883 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-0, testbed-node-1 2026-04-09 01:16:03.159893 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-09 01:16:03.159899 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2026-04-09 01:16:03.159905 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-09 01:16:03.159912 | orchestrator | 2026-04-09 01:16:03.159918 | orchestrator | data: 2026-04-09 
01:16:03.159925 | orchestrator | volumes: 1/1 healthy 2026-04-09 01:16:03.159932 | orchestrator | pools: 14 pools, 401 pgs 2026-04-09 01:16:03.159938 | orchestrator | objects: 555 objects, 2.2 GiB 2026-04-09 01:16:03.159942 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-09 01:16:03.159946 | orchestrator | pgs: 401 active+clean 2026-04-09 01:16:03.159950 | orchestrator | 2026-04-09 01:16:03.202413 | orchestrator | 2026-04-09 01:16:03.202458 | orchestrator | # Ceph versions 2026-04-09 01:16:03.202463 | orchestrator | 2026-04-09 01:16:03.202468 | orchestrator | + echo 2026-04-09 01:16:03.202472 | orchestrator | + echo '# Ceph versions' 2026-04-09 01:16:03.202484 | orchestrator | + echo 2026-04-09 01:16:03.202488 | orchestrator | + ceph versions 2026-04-09 01:16:03.754719 | orchestrator | { 2026-04-09 01:16:03.754787 | orchestrator | "mon": { 2026-04-09 01:16:03.754795 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:03.754801 | orchestrator | }, 2026-04-09 01:16:03.754805 | orchestrator | "mgr": { 2026-04-09 01:16:03.754820 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:03.754832 | orchestrator | }, 2026-04-09 01:16:03.754840 | orchestrator | "osd": { 2026-04-09 01:16:03.754847 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-09 01:16:03.754855 | orchestrator | }, 2026-04-09 01:16:03.754861 | orchestrator | "mds": { 2026-04-09 01:16:03.754868 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:03.754875 | orchestrator | }, 2026-04-09 01:16:03.754881 | orchestrator | "rgw": { 2026-04-09 01:16:03.754889 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-09 01:16:03.754897 | orchestrator | }, 2026-04-09 01:16:03.754905 | orchestrator | 
"overall": { 2026-04-09 01:16:03.754913 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-09 01:16:03.754920 | orchestrator | } 2026-04-09 01:16:03.754928 | orchestrator | } 2026-04-09 01:16:03.802280 | orchestrator | 2026-04-09 01:16:03.802330 | orchestrator | # Ceph OSD tree 2026-04-09 01:16:03.802336 | orchestrator | 2026-04-09 01:16:03.802340 | orchestrator | + echo 2026-04-09 01:16:03.802345 | orchestrator | + echo '# Ceph OSD tree' 2026-04-09 01:16:03.802357 | orchestrator | + echo 2026-04-09 01:16:03.802366 | orchestrator | + ceph osd df tree 2026-04-09 01:16:04.291093 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-09 01:16:04.291156 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 417 MiB 113 GiB 5.91 1.00 - root default 2026-04-09 01:16:04.291161 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-04-09 01:16:04.291166 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 70 MiB 19 GiB 5.01 0.85 189 up osd.0 2026-04-09 01:16:04.291170 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.80 1.15 201 up osd.3 2026-04-09 01:16:04.291174 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-04-09 01:16:04.291178 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.18 1.05 192 up osd.1 2026-04-09 01:16:04.291181 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.63 0.95 196 up osd.4 2026-04-09 01:16:04.291197 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-04-09 01:16:04.291202 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.04 1.19 206 up osd.2 2026-04-09 01:16:04.291205 | orchestrator | 5 hdd 0.01949 1.00000 
20 GiB 976 MiB 907 MiB 1 KiB 70 MiB 19 GiB 4.77 0.81 186 up osd.5 2026-04-09 01:16:04.291209 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 417 MiB 113 GiB 5.91 2026-04-09 01:16:04.291213 | orchestrator | MIN/MAX VAR: 0.81/1.19 STDDEV: 0.85 2026-04-09 01:16:04.343171 | orchestrator | 2026-04-09 01:16:04.343230 | orchestrator | # Ceph monitor status 2026-04-09 01:16:04.343240 | orchestrator | 2026-04-09 01:16:04.343260 | orchestrator | + echo 2026-04-09 01:16:04.343267 | orchestrator | + echo '# Ceph monitor status' 2026-04-09 01:16:04.343274 | orchestrator | + echo 2026-04-09 01:16:04.343281 | orchestrator | + ceph mon stat 2026-04-09 01:16:04.933051 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 10, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-09 01:16:04.976146 | orchestrator | 2026-04-09 01:16:04.976217 | orchestrator | # Ceph quorum status 2026-04-09 01:16:04.976226 | orchestrator | + echo 2026-04-09 01:16:04.976233 | orchestrator | + echo '# Ceph quorum status' 2026-04-09 01:16:04.976239 | orchestrator | + echo 2026-04-09 01:16:04.976619 | orchestrator | 2026-04-09 01:16:04.976832 | orchestrator | + ceph quorum_status 2026-04-09 01:16:04.977021 | orchestrator | + jq 2026-04-09 01:16:05.563548 | orchestrator | { 2026-04-09 01:16:05.563638 | orchestrator | "election_epoch": 10, 2026-04-09 01:16:05.563651 | orchestrator | "quorum": [ 2026-04-09 01:16:05.563657 | orchestrator | 0, 2026-04-09 01:16:05.563664 | orchestrator | 1, 2026-04-09 01:16:05.563670 | orchestrator | 2 2026-04-09 01:16:05.563676 | orchestrator | ], 2026-04-09 01:16:05.563682 | orchestrator | "quorum_names": [ 2026-04-09 01:16:05.563688 | orchestrator | "testbed-node-0", 2026-04-09 01:16:05.563695 | 
orchestrator | "testbed-node-1", 2026-04-09 01:16:05.563701 | orchestrator | "testbed-node-2" 2026-04-09 01:16:05.563708 | orchestrator | ], 2026-04-09 01:16:05.563714 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-09 01:16:05.563722 | orchestrator | "quorum_age": 1527, 2026-04-09 01:16:05.563729 | orchestrator | "features": { 2026-04-09 01:16:05.563735 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-09 01:16:05.563743 | orchestrator | "quorum_mon": [ 2026-04-09 01:16:05.563749 | orchestrator | "kraken", 2026-04-09 01:16:05.563756 | orchestrator | "luminous", 2026-04-09 01:16:05.563762 | orchestrator | "mimic", 2026-04-09 01:16:05.563768 | orchestrator | "osdmap-prune", 2026-04-09 01:16:05.563774 | orchestrator | "nautilus", 2026-04-09 01:16:05.563780 | orchestrator | "octopus", 2026-04-09 01:16:05.563787 | orchestrator | "pacific", 2026-04-09 01:16:05.563794 | orchestrator | "elector-pinging", 2026-04-09 01:16:05.563800 | orchestrator | "quincy", 2026-04-09 01:16:05.563806 | orchestrator | "reef" 2026-04-09 01:16:05.563813 | orchestrator | ] 2026-04-09 01:16:05.563819 | orchestrator | }, 2026-04-09 01:16:05.563825 | orchestrator | "monmap": { 2026-04-09 01:16:05.563832 | orchestrator | "epoch": 1, 2026-04-09 01:16:05.563838 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-09 01:16:05.563856 | orchestrator | "modified": "2026-04-09T00:50:08.912881Z", 2026-04-09 01:16:05.563863 | orchestrator | "created": "2026-04-09T00:50:08.912881Z", 2026-04-09 01:16:05.563887 | orchestrator | "min_mon_release": 18, 2026-04-09 01:16:05.563894 | orchestrator | "min_mon_release_name": "reef", 2026-04-09 01:16:05.563908 | orchestrator | "election_strategy": 1, 2026-04-09 01:16:05.563914 | orchestrator | "disallowed_leaders": "", 2026-04-09 01:16:05.563921 | orchestrator | "stretch_mode": false, 2026-04-09 01:16:05.563927 | orchestrator | "tiebreaker_mon": "", 2026-04-09 01:16:05.563933 | orchestrator | "removed_ranks": "", 
2026-04-09 01:16:05.563940 | orchestrator | "features": { 2026-04-09 01:16:05.563947 | orchestrator | "persistent": [ 2026-04-09 01:16:05.563953 | orchestrator | "kraken", 2026-04-09 01:16:05.563959 | orchestrator | "luminous", 2026-04-09 01:16:05.563966 | orchestrator | "mimic", 2026-04-09 01:16:05.563972 | orchestrator | "osdmap-prune", 2026-04-09 01:16:05.564003 | orchestrator | "nautilus", 2026-04-09 01:16:05.564010 | orchestrator | "octopus", 2026-04-09 01:16:05.564016 | orchestrator | "pacific", 2026-04-09 01:16:05.564022 | orchestrator | "elector-pinging", 2026-04-09 01:16:05.564029 | orchestrator | "quincy", 2026-04-09 01:16:05.564035 | orchestrator | "reef" 2026-04-09 01:16:05.564041 | orchestrator | ], 2026-04-09 01:16:05.564047 | orchestrator | "optional": [] 2026-04-09 01:16:05.564053 | orchestrator | }, 2026-04-09 01:16:05.564059 | orchestrator | "mons": [ 2026-04-09 01:16:05.564065 | orchestrator | { 2026-04-09 01:16:05.564071 | orchestrator | "rank": 0, 2026-04-09 01:16:05.564078 | orchestrator | "name": "testbed-node-0", 2026-04-09 01:16:05.564084 | orchestrator | "public_addrs": { 2026-04-09 01:16:05.564091 | orchestrator | "addrvec": [ 2026-04-09 01:16:05.564097 | orchestrator | { 2026-04-09 01:16:05.564103 | orchestrator | "type": "v2", 2026-04-09 01:16:05.564110 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-09 01:16:05.564116 | orchestrator | "nonce": 0 2026-04-09 01:16:05.564123 | orchestrator | }, 2026-04-09 01:16:05.564129 | orchestrator | { 2026-04-09 01:16:05.564135 | orchestrator | "type": "v1", 2026-04-09 01:16:05.564142 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-09 01:16:05.564148 | orchestrator | "nonce": 0 2026-04-09 01:16:05.564154 | orchestrator | } 2026-04-09 01:16:05.564160 | orchestrator | ] 2026-04-09 01:16:05.564167 | orchestrator | }, 2026-04-09 01:16:05.564173 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-09 01:16:05.564179 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-09 
01:16:05.564186 | orchestrator | "priority": 0, 2026-04-09 01:16:05.564192 | orchestrator | "weight": 0, 2026-04-09 01:16:05.564199 | orchestrator | "crush_location": "{}" 2026-04-09 01:16:05.564205 | orchestrator | }, 2026-04-09 01:16:05.564211 | orchestrator | { 2026-04-09 01:16:05.564217 | orchestrator | "rank": 1, 2026-04-09 01:16:05.564224 | orchestrator | "name": "testbed-node-1", 2026-04-09 01:16:05.564230 | orchestrator | "public_addrs": { 2026-04-09 01:16:05.564237 | orchestrator | "addrvec": [ 2026-04-09 01:16:05.564242 | orchestrator | { 2026-04-09 01:16:05.564266 | orchestrator | "type": "v2", 2026-04-09 01:16:05.564273 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-09 01:16:05.564279 | orchestrator | "nonce": 0 2026-04-09 01:16:05.564285 | orchestrator | }, 2026-04-09 01:16:05.564292 | orchestrator | { 2026-04-09 01:16:05.564298 | orchestrator | "type": "v1", 2026-04-09 01:16:05.564304 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-09 01:16:05.564310 | orchestrator | "nonce": 0 2026-04-09 01:16:05.564316 | orchestrator | } 2026-04-09 01:16:05.564322 | orchestrator | ] 2026-04-09 01:16:05.564329 | orchestrator | }, 2026-04-09 01:16:05.564407 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-09 01:16:05.564417 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-09 01:16:05.564423 | orchestrator | "priority": 0, 2026-04-09 01:16:05.564430 | orchestrator | "weight": 0, 2026-04-09 01:16:05.564436 | orchestrator | "crush_location": "{}" 2026-04-09 01:16:05.564443 | orchestrator | }, 2026-04-09 01:16:05.564449 | orchestrator | { 2026-04-09 01:16:05.564456 | orchestrator | "rank": 2, 2026-04-09 01:16:05.564462 | orchestrator | "name": "testbed-node-2", 2026-04-09 01:16:05.564469 | orchestrator | "public_addrs": { 2026-04-09 01:16:05.564475 | orchestrator | "addrvec": [ 2026-04-09 01:16:05.564482 | orchestrator | { 2026-04-09 01:16:05.564488 | orchestrator | "type": "v2", 2026-04-09 01:16:05.564495 | orchestrator | "addr": 
"192.168.16.12:3300", 2026-04-09 01:16:05.564502 | orchestrator | "nonce": 0 2026-04-09 01:16:05.564508 | orchestrator | }, 2026-04-09 01:16:05.564514 | orchestrator | { 2026-04-09 01:16:05.564521 | orchestrator | "type": "v1", 2026-04-09 01:16:05.564527 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-09 01:16:05.564534 | orchestrator | "nonce": 0 2026-04-09 01:16:05.564540 | orchestrator | } 2026-04-09 01:16:05.564547 | orchestrator | ] 2026-04-09 01:16:05.564553 | orchestrator | }, 2026-04-09 01:16:05.564560 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-09 01:16:05.564566 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-09 01:16:05.564572 | orchestrator | "priority": 0, 2026-04-09 01:16:05.564579 | orchestrator | "weight": 0, 2026-04-09 01:16:05.564586 | orchestrator | "crush_location": "{}" 2026-04-09 01:16:05.564600 | orchestrator | } 2026-04-09 01:16:05.564607 | orchestrator | ] 2026-04-09 01:16:05.564613 | orchestrator | } 2026-04-09 01:16:05.564619 | orchestrator | } 2026-04-09 01:16:05.564712 | orchestrator | 2026-04-09 01:16:05.564721 | orchestrator | # Ceph free space status 2026-04-09 01:16:05.564727 | orchestrator | 2026-04-09 01:16:05.564733 | orchestrator | + echo 2026-04-09 01:16:05.564739 | orchestrator | + echo '# Ceph free space status' 2026-04-09 01:16:05.564746 | orchestrator | + echo 2026-04-09 01:16:05.564752 | orchestrator | + ceph df 2026-04-09 01:16:06.155059 | orchestrator | --- RAW STORAGE --- 2026-04-09 01:16:06.155138 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-09 01:16:06.155145 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-04-09 01:16:06.155150 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-04-09 01:16:06.155154 | orchestrator | 2026-04-09 01:16:06.155159 | orchestrator | --- POOLS --- 2026-04-09 01:16:06.155163 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-09 01:16:06.155169 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 
0 53 GiB 2026-04-09 01:16:06.155173 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:06.155177 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-09 01:16:06.155181 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:06.155186 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:06.155189 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-09 01:16:06.155193 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-09 01:16:06.155197 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-09 01:16:06.155201 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-04-09 01:16:06.155205 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:06.155209 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:06.155213 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2026-04-09 01:16:06.155217 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:06.155221 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-09 01:16:06.201280 | orchestrator | ++ semver latest 5.0.0 2026-04-09 01:16:06.249027 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-09 01:16:06.249102 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-09 01:16:06.249109 | orchestrator | + osism apply facts 2026-04-09 01:16:17.644724 | orchestrator | 2026-04-09 01:16:17 | INFO  | Prepare task for execution of facts. 2026-04-09 01:16:17.728715 | orchestrator | 2026-04-09 01:16:17 | INFO  | Task f885c42c-5427-42d4-8690-a17a2b0337cf (facts) was prepared for execution. 2026-04-09 01:16:17.728810 | orchestrator | 2026-04-09 01:16:17 | INFO  | It takes a moment until task f885c42c-5427-42d4-8690-a17a2b0337cf (facts) has been started and output is visible here. 
2026-04-09 01:16:29.914057 | orchestrator | 2026-04-09 01:16:29.914114 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-09 01:16:29.914124 | orchestrator | 2026-04-09 01:16:29.914131 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-09 01:16:29.914138 | orchestrator | Thursday 09 April 2026 01:16:21 +0000 (0:00:00.351) 0:00:00.351 ******** 2026-04-09 01:16:29.914145 | orchestrator | ok: [testbed-manager] 2026-04-09 01:16:29.914151 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:16:29.914158 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:16:29.914163 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:16:29.914166 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:16:29.914170 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:16:29.914174 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:16:29.914178 | orchestrator | 2026-04-09 01:16:29.914182 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-09 01:16:29.914200 | orchestrator | Thursday 09 April 2026 01:16:22 +0000 (0:00:01.399) 0:00:01.750 ******** 2026-04-09 01:16:29.914205 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:16:29.914215 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:16:29.914219 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:16:29.914222 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:16:29.914226 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:16:29.914230 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:16:29.914234 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:16:29.914237 | orchestrator | 2026-04-09 01:16:29.914241 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-09 01:16:29.914245 | orchestrator | 2026-04-09 01:16:29.914249 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-09 01:16:29.914253 | orchestrator | Thursday 09 April 2026 01:16:23 +0000 (0:00:01.265) 0:00:03.015 ******** 2026-04-09 01:16:29.914256 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:16:29.914260 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:16:29.914264 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:16:29.914268 | orchestrator | ok: [testbed-manager] 2026-04-09 01:16:29.914277 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:16:29.914281 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:16:29.914284 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:16:29.914288 | orchestrator | 2026-04-09 01:16:29.914292 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-09 01:16:29.914296 | orchestrator | 2026-04-09 01:16:29.914300 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-09 01:16:29.914303 | orchestrator | Thursday 09 April 2026 01:16:28 +0000 (0:00:05.202) 0:00:08.217 ******** 2026-04-09 01:16:29.914341 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:16:29.914349 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:16:29.914355 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:16:29.914368 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:16:29.914372 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:16:29.914376 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:16:29.914380 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:16:29.914387 | orchestrator | 2026-04-09 01:16:29.914394 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:16:29.914399 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:16:29.914406 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-09 01:16:29.914412 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:16:29.914418 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:16:29.914423 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:16:29.914430 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:16:29.914436 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:16:29.914443 | orchestrator | 2026-04-09 01:16:29.914449 | orchestrator | 2026-04-09 01:16:29.914455 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:16:29.914462 | orchestrator | Thursday 09 April 2026 01:16:29 +0000 (0:00:00.710) 0:00:08.928 ******** 2026-04-09 01:16:29.914482 | orchestrator | =============================================================================== 2026-04-09 01:16:29.914489 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.20s 2026-04-09 01:16:29.914506 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.40s 2026-04-09 01:16:29.914510 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2026-04-09 01:16:29.914513 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.71s 2026-04-09 01:16:30.080215 | orchestrator | + osism validate ceph-mons 2026-04-09 01:17:00.944312 | orchestrator | 2026-04-09 01:17:00.944472 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-09 01:17:00.944506 | orchestrator | 2026-04-09 01:17:00.944511 | orchestrator | TASK [Get timestamp for report file] 
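PLAY RECAP host lines like the ones above follow a fixed Ansible format (`host : ok=N changed=N unreachable=N failed=N ...`), so a job wrapper can check them mechanically. A small sketch of parsing one recap line and deciding whether the host passed:

```python
import re

# Matches one Ansible "PLAY RECAP" host line, e.g.
# testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str):
    """Split a recap line into (host, {counter: value})."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

def recap_ok(counters: dict) -> bool:
    # A host is healthy when nothing failed and it stayed reachable.
    return counters.get("failed", 0) == 0 and counters.get("unreachable", 0) == 0
```

For the facts play above, every host reports `failed=0 unreachable=0`, so `recap_ok` would return `True` across the board.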
******************************************* 2026-04-09 01:17:00.944516 | orchestrator | Thursday 09 April 2026 01:16:45 +0000 (0:00:00.519) 0:00:00.519 ******** 2026-04-09 01:17:00.944522 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:00.944526 | orchestrator | 2026-04-09 01:17:00.944531 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 01:17:00.944535 | orchestrator | Thursday 09 April 2026 01:16:46 +0000 (0:00:00.992) 0:00:01.511 ******** 2026-04-09 01:17:00.944539 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:00.944544 | orchestrator | 2026-04-09 01:17:00.944548 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 01:17:00.944552 | orchestrator | Thursday 09 April 2026 01:16:46 +0000 (0:00:00.723) 0:00:02.235 ******** 2026-04-09 01:17:00.944556 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944561 | orchestrator | 2026-04-09 01:17:00.944565 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-09 01:17:00.944569 | orchestrator | Thursday 09 April 2026 01:16:46 +0000 (0:00:00.113) 0:00:02.348 ******** 2026-04-09 01:17:00.944572 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944576 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:00.944580 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:00.944584 | orchestrator | 2026-04-09 01:17:00.944588 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-09 01:17:00.944591 | orchestrator | Thursday 09 April 2026 01:16:47 +0000 (0:00:00.306) 0:00:02.654 ******** 2026-04-09 01:17:00.944596 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:00.944599 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:00.944603 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944607 | 
orchestrator | 2026-04-09 01:17:00.944611 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-09 01:17:00.944615 | orchestrator | Thursday 09 April 2026 01:16:48 +0000 (0:00:01.621) 0:00:04.276 ******** 2026-04-09 01:17:00.944619 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944623 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:17:00.944627 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:17:00.944631 | orchestrator | 2026-04-09 01:17:00.944635 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-09 01:17:00.944639 | orchestrator | Thursday 09 April 2026 01:16:49 +0000 (0:00:00.291) 0:00:04.567 ******** 2026-04-09 01:17:00.944643 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944647 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:00.944651 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:00.944655 | orchestrator | 2026-04-09 01:17:00.944659 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:17:00.944663 | orchestrator | Thursday 09 April 2026 01:16:49 +0000 (0:00:00.311) 0:00:04.878 ******** 2026-04-09 01:17:00.944666 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944670 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:00.944674 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:00.944678 | orchestrator | 2026-04-09 01:17:00.944682 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-09 01:17:00.944685 | orchestrator | Thursday 09 April 2026 01:16:49 +0000 (0:00:00.286) 0:00:05.165 ******** 2026-04-09 01:17:00.944689 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944710 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:17:00.944714 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:17:00.944718 | orchestrator | 2026-04-09 
01:17:00.944722 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-09 01:17:00.944726 | orchestrator | Thursday 09 April 2026 01:16:50 +0000 (0:00:00.440) 0:00:05.606 ******** 2026-04-09 01:17:00.944730 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944733 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:00.944737 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:00.944741 | orchestrator | 2026-04-09 01:17:00.944756 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 01:17:00.944760 | orchestrator | Thursday 09 April 2026 01:16:50 +0000 (0:00:00.329) 0:00:05.936 ******** 2026-04-09 01:17:00.944764 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944768 | orchestrator | 2026-04-09 01:17:00.944772 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 01:17:00.944776 | orchestrator | Thursday 09 April 2026 01:16:50 +0000 (0:00:00.258) 0:00:06.194 ******** 2026-04-09 01:17:00.944780 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944783 | orchestrator | 2026-04-09 01:17:00.944788 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 01:17:00.944792 | orchestrator | Thursday 09 April 2026 01:16:50 +0000 (0:00:00.244) 0:00:06.438 ******** 2026-04-09 01:17:00.944796 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944800 | orchestrator | 2026-04-09 01:17:00.944803 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:17:00.944807 | orchestrator | Thursday 09 April 2026 01:16:51 +0000 (0:00:00.247) 0:00:06.686 ******** 2026-04-09 01:17:00.944811 | orchestrator | 2026-04-09 01:17:00.944815 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:17:00.944818 | orchestrator | 
Thursday 09 April 2026 01:16:51 +0000 (0:00:00.093) 0:00:06.780 ******** 2026-04-09 01:17:00.944822 | orchestrator | 2026-04-09 01:17:00.944826 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:17:00.944830 | orchestrator | Thursday 09 April 2026 01:16:51 +0000 (0:00:00.070) 0:00:06.850 ******** 2026-04-09 01:17:00.944834 | orchestrator | 2026-04-09 01:17:00.944838 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 01:17:00.944842 | orchestrator | Thursday 09 April 2026 01:16:51 +0000 (0:00:00.218) 0:00:07.068 ******** 2026-04-09 01:17:00.944845 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944849 | orchestrator | 2026-04-09 01:17:00.944853 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-09 01:17:00.944857 | orchestrator | Thursday 09 April 2026 01:16:51 +0000 (0:00:00.252) 0:00:07.321 ******** 2026-04-09 01:17:00.944861 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944864 | orchestrator | 2026-04-09 01:17:00.944881 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-09 01:17:00.944886 | orchestrator | Thursday 09 April 2026 01:16:52 +0000 (0:00:00.275) 0:00:07.596 ******** 2026-04-09 01:17:00.944891 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944896 | orchestrator | 2026-04-09 01:17:00.944900 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-09 01:17:00.944905 | orchestrator | Thursday 09 April 2026 01:16:52 +0000 (0:00:00.113) 0:00:07.709 ******** 2026-04-09 01:17:00.944909 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:17:00.944914 | orchestrator | 2026-04-09 01:17:00.944919 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-09 01:17:00.944924 | orchestrator | 
Thursday 09 April 2026 01:16:53 +0000 (0:00:01.747) 0:00:09.457 ******** 2026-04-09 01:17:00.944928 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944932 | orchestrator | 2026-04-09 01:17:00.944937 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-09 01:17:00.944941 | orchestrator | Thursday 09 April 2026 01:16:54 +0000 (0:00:00.330) 0:00:09.787 ******** 2026-04-09 01:17:00.944950 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.944954 | orchestrator | 2026-04-09 01:17:00.944959 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-09 01:17:00.944964 | orchestrator | Thursday 09 April 2026 01:16:54 +0000 (0:00:00.142) 0:00:09.929 ******** 2026-04-09 01:17:00.944968 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944973 | orchestrator | 2026-04-09 01:17:00.944977 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-09 01:17:00.944982 | orchestrator | Thursday 09 April 2026 01:16:54 +0000 (0:00:00.298) 0:00:10.227 ******** 2026-04-09 01:17:00.944989 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.944994 | orchestrator | 2026-04-09 01:17:00.944998 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-09 01:17:00.945003 | orchestrator | Thursday 09 April 2026 01:16:55 +0000 (0:00:00.292) 0:00:10.520 ******** 2026-04-09 01:17:00.945007 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.945012 | orchestrator | 2026-04-09 01:17:00.945016 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-09 01:17:00.945021 | orchestrator | Thursday 09 April 2026 01:16:55 +0000 (0:00:00.114) 0:00:10.634 ******** 2026-04-09 01:17:00.945025 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.945030 | orchestrator | 2026-04-09 01:17:00.945034 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-04-09 01:17:00.945039 | orchestrator | Thursday 09 April 2026 01:16:55 +0000 (0:00:00.113) 0:00:10.748 ******** 2026-04-09 01:17:00.945043 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.945047 | orchestrator | 2026-04-09 01:17:00.945052 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-09 01:17:00.945057 | orchestrator | Thursday 09 April 2026 01:16:55 +0000 (0:00:00.264) 0:00:11.013 ******** 2026-04-09 01:17:00.945061 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:17:00.945066 | orchestrator | 2026-04-09 01:17:00.945071 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-09 01:17:00.945075 | orchestrator | Thursday 09 April 2026 01:16:56 +0000 (0:00:01.354) 0:00:12.367 ******** 2026-04-09 01:17:00.945080 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.945086 | orchestrator | 2026-04-09 01:17:00.945091 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-09 01:17:00.945095 | orchestrator | Thursday 09 April 2026 01:16:57 +0000 (0:00:00.302) 0:00:12.670 ******** 2026-04-09 01:17:00.945099 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.945103 | orchestrator | 2026-04-09 01:17:00.945107 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-09 01:17:00.945110 | orchestrator | Thursday 09 April 2026 01:16:57 +0000 (0:00:00.141) 0:00:12.812 ******** 2026-04-09 01:17:00.945114 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:00.945118 | orchestrator | 2026-04-09 01:17:00.945122 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-09 01:17:00.945126 | orchestrator | Thursday 09 April 2026 01:16:57 +0000 (0:00:00.159) 0:00:12.972 ******** 2026-04-09 01:17:00.945130 | 
orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.945134 | orchestrator | 2026-04-09 01:17:00.945138 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-09 01:17:00.945141 | orchestrator | Thursday 09 April 2026 01:16:57 +0000 (0:00:00.123) 0:00:13.095 ******** 2026-04-09 01:17:00.945145 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.945149 | orchestrator | 2026-04-09 01:17:00.945153 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 01:17:00.945156 | orchestrator | Thursday 09 April 2026 01:16:57 +0000 (0:00:00.134) 0:00:13.229 ******** 2026-04-09 01:17:00.945160 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:00.945164 | orchestrator | 2026-04-09 01:17:00.945168 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 01:17:00.945172 | orchestrator | Thursday 09 April 2026 01:16:58 +0000 (0:00:00.275) 0:00:13.504 ******** 2026-04-09 01:17:00.945179 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:00.945183 | orchestrator | 2026-04-09 01:17:00.945190 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 01:17:00.945194 | orchestrator | Thursday 09 April 2026 01:16:58 +0000 (0:00:00.256) 0:00:13.761 ******** 2026-04-09 01:17:00.945198 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:00.945202 | orchestrator | 2026-04-09 01:17:00.945206 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 01:17:00.945209 | orchestrator | Thursday 09 April 2026 01:17:00 +0000 (0:00:01.747) 0:00:15.508 ******** 2026-04-09 01:17:00.945213 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:00.945217 | orchestrator | 2026-04-09 01:17:00.945221 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-04-09 01:17:00.945224 | orchestrator | Thursday 09 April 2026 01:17:00 +0000 (0:00:00.276) 0:00:15.785 ******** 2026-04-09 01:17:00.945228 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:00.945232 | orchestrator | 2026-04-09 01:17:00.945238 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:17:03.150139 | orchestrator | Thursday 09 April 2026 01:17:00 +0000 (0:00:00.623) 0:00:16.408 ******** 2026-04-09 01:17:03.150188 | orchestrator | 2026-04-09 01:17:03.150194 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:17:03.150199 | orchestrator | Thursday 09 April 2026 01:17:01 +0000 (0:00:00.070) 0:00:16.479 ******** 2026-04-09 01:17:03.150203 | orchestrator | 2026-04-09 01:17:03.150207 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:17:03.150211 | orchestrator | Thursday 09 April 2026 01:17:01 +0000 (0:00:00.067) 0:00:16.547 ******** 2026-04-09 01:17:03.150215 | orchestrator | 2026-04-09 01:17:03.150219 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 01:17:03.150223 | orchestrator | Thursday 09 April 2026 01:17:01 +0000 (0:00:00.071) 0:00:16.618 ******** 2026-04-09 01:17:03.150227 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:03.150231 | orchestrator | 2026-04-09 01:17:03.150234 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 01:17:03.150238 | orchestrator | Thursday 09 April 2026 01:17:02 +0000 (0:00:01.275) 0:00:17.894 ******** 2026-04-09 01:17:03.150242 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-09 01:17:03.150246 | orchestrator |  "msg": [ 
2026-04-09 01:17:03.150250 | orchestrator |  "Validator run completed.", 2026-04-09 01:17:03.150254 | orchestrator |  "You can find the report file here:", 2026-04-09 01:17:03.150258 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-09T01:16:45+00:00-report.json", 2026-04-09 01:17:03.150262 | orchestrator |  "on the following host:", 2026-04-09 01:17:03.150266 | orchestrator |  "testbed-manager" 2026-04-09 01:17:03.150270 | orchestrator |  ] 2026-04-09 01:17:03.150274 | orchestrator | } 2026-04-09 01:17:03.150278 | orchestrator | 2026-04-09 01:17:03.150282 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:17:03.150287 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-09 01:17:03.150291 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:17:03.150295 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 01:17:03.150299 | orchestrator | 2026-04-09 01:17:03.150303 | orchestrator | 2026-04-09 01:17:03.150306 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:17:03.150310 | orchestrator | Thursday 09 April 2026 01:17:02 +0000 (0:00:00.428) 0:00:18.323 ******** 2026-04-09 01:17:03.150326 | orchestrator | =============================================================================== 2026-04-09 01:17:03.150330 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.75s 2026-04-09 01:17:03.150334 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s 2026-04-09 01:17:03.150338 | orchestrator | Get container info ------------------------------------------------------ 1.62s 2026-04-09 01:17:03.150341 | orchestrator | Gather status data 
------------------------------------------------------ 1.35s 2026-04-09 01:17:03.150345 | orchestrator | Write report file ------------------------------------------------------- 1.28s 2026-04-09 01:17:03.150349 | orchestrator | Get timestamp for report file ------------------------------------------- 0.99s 2026-04-09 01:17:03.150353 | orchestrator | Create report output directory ------------------------------------------ 0.72s 2026-04-09 01:17:03.150357 | orchestrator | Aggregate test results step three --------------------------------------- 0.62s 2026-04-09 01:17:03.150360 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.44s 2026-04-09 01:17:03.150364 | orchestrator | Print report file information ------------------------------------------- 0.43s 2026-04-09 01:17:03.150368 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-04-09 01:17:03.150372 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s 2026-04-09 01:17:03.150375 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.33s 2026-04-09 01:17:03.150379 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s 2026-04-09 01:17:03.150383 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-04-09 01:17:03.150387 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2026-04-09 01:17:03.150390 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.30s 2026-04-09 01:17:03.150394 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s 2026-04-09 01:17:03.150443 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-04-09 01:17:03.150448 | orchestrator | Prepare test data 
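The validator writes its findings to a JSON report on `testbed-manager` (the `/opt/reports/validator/...` path printed above), which downstream tooling can inspect instead of scraping the console. A sketch of such a check; the schema used here (a top-level `"tests"` mapping of test name to a `"result"` of `"passed"` or `"failed"`) is an illustrative assumption, not the documented report layout:

```python
def failed_tests(report: dict) -> list:
    """List failed tests from a validator report.

    Assumed schema for illustration only: {"tests": {name: {"result":
    "passed" | "failed"}}}. The real OSISM report layout may differ.
    """
    return [
        name
        for name, data in report.get("tests", {}).items()
        if data.get("result") != "passed"
    ]
```

A run like the one above, where every test passed, would yield an empty list.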
------------------------------------------------------- 0.29s 2026-04-09 01:17:03.327948 | orchestrator | + osism validate ceph-mgrs 2026-04-09 01:17:31.839779 | orchestrator | 2026-04-09 01:17:31.839874 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-09 01:17:31.839882 | orchestrator | 2026-04-09 01:17:31.839887 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-09 01:17:31.839892 | orchestrator | Thursday 09 April 2026 01:17:17 +0000 (0:00:00.540) 0:00:00.540 ******** 2026-04-09 01:17:31.839897 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:31.839902 | orchestrator | 2026-04-09 01:17:31.839906 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-09 01:17:31.839910 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:01.033) 0:00:01.574 ******** 2026-04-09 01:17:31.839914 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-09 01:17:31.839918 | orchestrator | 2026-04-09 01:17:31.839922 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-09 01:17:31.839927 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.701) 0:00:02.275 ******** 2026-04-09 01:17:31.839931 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:31.839936 | orchestrator | 2026-04-09 01:17:31.839939 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-09 01:17:31.839949 | orchestrator | Thursday 09 April 2026 01:17:19 +0000 (0:00:00.136) 0:00:02.412 ******** 2026-04-09 01:17:31.839953 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:31.839957 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:31.839961 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:31.839965 | orchestrator | 2026-04-09 01:17:31.839968 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-04-09 01:17:31.839972 | orchestrator | Thursday 09 April 2026 01:17:20 +0000 (0:00:00.272) 0:00:02.684 ******** 2026-04-09 01:17:31.839987 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:31.839991 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:31.839995 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:31.839998 | orchestrator | 2026-04-09 01:17:31.840002 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-09 01:17:31.840006 | orchestrator | Thursday 09 April 2026 01:17:21 +0000 (0:00:01.524) 0:00:04.209 ******** 2026-04-09 01:17:31.840010 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:31.840014 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:17:31.840018 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:17:31.840021 | orchestrator | 2026-04-09 01:17:31.840027 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-09 01:17:31.840031 | orchestrator | Thursday 09 April 2026 01:17:21 +0000 (0:00:00.283) 0:00:04.492 ******** 2026-04-09 01:17:31.840035 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:31.840039 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:31.840042 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:31.840046 | orchestrator | 2026-04-09 01:17:31.840050 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:17:31.840054 | orchestrator | Thursday 09 April 2026 01:17:22 +0000 (0:00:00.308) 0:00:04.801 ******** 2026-04-09 01:17:31.840057 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:31.840061 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:31.840065 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:31.840069 | orchestrator | 2026-04-09 01:17:31.840073 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-09 01:17:31.840076 | orchestrator | Thursday 09 April 2026 01:17:22 +0000 (0:00:00.301) 0:00:05.102 ******** 2026-04-09 01:17:31.840080 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:31.840084 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:17:31.840088 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:17:31.840092 | orchestrator | 2026-04-09 01:17:31.840095 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-09 01:17:31.840099 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.460) 0:00:05.562 ******** 2026-04-09 01:17:31.840103 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:17:31.840107 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:17:31.840110 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:17:31.840114 | orchestrator | 2026-04-09 01:17:31.840118 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 01:17:31.840122 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.291) 0:00:05.854 ******** 2026-04-09 01:17:31.840126 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:31.840129 | orchestrator | 2026-04-09 01:17:31.840133 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 01:17:31.840137 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.242) 0:00:06.096 ******** 2026-04-09 01:17:31.840141 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:31.840145 | orchestrator | 2026-04-09 01:17:31.840148 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 01:17:31.840152 | orchestrator | Thursday 09 April 2026 01:17:23 +0000 (0:00:00.241) 0:00:06.338 ******** 2026-04-09 01:17:31.840156 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:17:31.840160 | orchestrator | 2026-04-09 01:17:31.840164 | orchestrator | TASK 
[Flush handlers] **********************************************************
2026-04-09 01:17:31.840167 | orchestrator | Thursday 09 April 2026 01:17:24 +0000 (0:00:00.251) 0:00:06.590 ********
2026-04-09 01:17:31.840171 | orchestrator |
2026-04-09 01:17:31.840175 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.840179 | orchestrator | Thursday 09 April 2026 01:17:24 +0000 (0:00:00.071) 0:00:06.661 ********
2026-04-09 01:17:31.840182 | orchestrator |
2026-04-09 01:17:31.840186 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.840190 | orchestrator | Thursday 09 April 2026 01:17:24 +0000 (0:00:00.068) 0:00:06.730 ********
2026-04-09 01:17:31.840196 | orchestrator |
2026-04-09 01:17:31.840200 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:17:31.840204 | orchestrator | Thursday 09 April 2026 01:17:24 +0000 (0:00:00.220) 0:00:06.950 ********
2026-04-09 01:17:31.840208 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:31.840212 | orchestrator |
2026-04-09 01:17:31.840215 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-09 01:17:31.840219 | orchestrator | Thursday 09 April 2026 01:17:24 +0000 (0:00:00.247) 0:00:07.198 ********
2026-04-09 01:17:31.840223 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:31.840227 | orchestrator |
2026-04-09 01:17:31.840239 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-09 01:17:31.840243 | orchestrator | Thursday 09 April 2026 01:17:24 +0000 (0:00:00.247) 0:00:07.445 ********
2026-04-09 01:17:31.840247 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:31.840251 | orchestrator |
2026-04-09 01:17:31.840254 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-09 01:17:31.840258 | orchestrator | Thursday 09 April 2026 01:17:25 +0000 (0:00:00.120) 0:00:07.566 ********
2026-04-09 01:17:31.840262 | orchestrator | changed: [testbed-node-0]
2026-04-09 01:17:31.840266 | orchestrator |
2026-04-09 01:17:31.840269 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-09 01:17:31.840273 | orchestrator | Thursday 09 April 2026 01:17:26 +0000 (0:00:01.567) 0:00:09.133 ********
2026-04-09 01:17:31.840277 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:31.840281 | orchestrator |
2026-04-09 01:17:31.840285 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-09 01:17:31.840289 | orchestrator | Thursday 09 April 2026 01:17:26 +0000 (0:00:00.242) 0:00:09.376 ********
2026-04-09 01:17:31.840292 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:31.840296 | orchestrator |
2026-04-09 01:17:31.840300 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-09 01:17:31.840304 | orchestrator | Thursday 09 April 2026 01:17:27 +0000 (0:00:00.281) 0:00:09.658 ********
2026-04-09 01:17:31.840307 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:31.840311 | orchestrator |
2026-04-09 01:17:31.840315 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-09 01:17:31.840319 | orchestrator | Thursday 09 April 2026 01:17:27 +0000 (0:00:00.128) 0:00:09.786 ********
2026-04-09 01:17:31.840323 | orchestrator | ok: [testbed-node-0]
2026-04-09 01:17:31.840326 | orchestrator |
2026-04-09 01:17:31.840330 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-09 01:17:31.840334 | orchestrator | Thursday 09 April 2026 01:17:27 +0000 (0:00:00.148) 0:00:09.935 ********
2026-04-09 01:17:31.840338 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:31.840341 | orchestrator |
2026-04-09 01:17:31.840345 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-09 01:17:31.840349 | orchestrator | Thursday 09 April 2026 01:17:27 +0000 (0:00:00.242) 0:00:10.219 ********
2026-04-09 01:17:31.840356 | orchestrator | skipping: [testbed-node-0]
2026-04-09 01:17:31.840361 | orchestrator |
2026-04-09 01:17:31.840365 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-09 01:17:31.840369 | orchestrator | Thursday 09 April 2026 01:17:27 +0000 (0:00:00.242) 0:00:10.461 ********
2026-04-09 01:17:31.840374 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:31.840378 | orchestrator |
2026-04-09 01:17:31.840382 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-09 01:17:31.840387 | orchestrator | Thursday 09 April 2026 01:17:29 +0000 (0:00:01.508) 0:00:11.970 ********
2026-04-09 01:17:31.840391 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:31.840395 | orchestrator |
2026-04-09 01:17:31.840399 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-09 01:17:31.840403 | orchestrator | Thursday 09 April 2026 01:17:29 +0000 (0:00:00.269) 0:00:12.239 ********
2026-04-09 01:17:31.840411 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:31.840415 | orchestrator |
2026-04-09 01:17:31.840420 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.840425 | orchestrator | Thursday 09 April 2026 01:17:29 +0000 (0:00:00.253) 0:00:12.493 ********
2026-04-09 01:17:31.840429 | orchestrator |
2026-04-09 01:17:31.840433 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.840438 | orchestrator | Thursday 09 April 2026 01:17:30 +0000 (0:00:00.070) 0:00:12.563 ********
2026-04-09 01:17:31.840442 | orchestrator |
2026-04-09 01:17:31.840447 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:17:31.840451 | orchestrator | Thursday 09 April 2026 01:17:30 +0000 (0:00:00.069) 0:00:12.633 ********
2026-04-09 01:17:31.840456 | orchestrator |
2026-04-09 01:17:31.840460 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-09 01:17:31.840463 | orchestrator | Thursday 09 April 2026 01:17:30 +0000 (0:00:00.073) 0:00:12.706 ********
2026-04-09 01:17:31.840485 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:31.840492 | orchestrator |
2026-04-09 01:17:31.840498 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:17:31.840504 | orchestrator | Thursday 09 April 2026 01:17:31 +0000 (0:00:01.264) 0:00:13.971 ********
2026-04-09 01:17:31.840510 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-09 01:17:31.840517 | orchestrator |     "msg": [
2026-04-09 01:17:31.840522 | orchestrator |         "Validator run completed.",
2026-04-09 01:17:31.840526 | orchestrator |         "You can find the report file here:",
2026-04-09 01:17:31.840530 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-04-09T01:17:18+00:00-report.json",
2026-04-09 01:17:31.840535 | orchestrator |         "on the following host:",
2026-04-09 01:17:31.840539 | orchestrator |         "testbed-manager"
2026-04-09 01:17:31.840542 | orchestrator |     ]
2026-04-09 01:17:31.840546 | orchestrator | }
2026-04-09 01:17:31.840550 | orchestrator |
2026-04-09 01:17:31.840554 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 01:17:31.840559 | orchestrator | testbed-node-0 : ok=19 changed=3 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0
2026-04-09 01:17:31.840564 | orchestrator | testbed-node-1 : ok=5 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-09 01:17:31.840571 | orchestrator | testbed-node-2 : ok=5 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-04-09 01:17:32.147044 | orchestrator |
2026-04-09 01:17:32.147127 | orchestrator |
2026-04-09 01:17:32.147137 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 01:17:32.147147 | orchestrator | Thursday 09 April 2026 01:17:31 +0000 (0:00:00.418) 0:00:14.389 ********
2026-04-09 01:17:32.147153 | orchestrator | ===============================================================================
2026-04-09 01:17:32.147159 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.57s
2026-04-09 01:17:32.147165 | orchestrator | Get container info ------------------------------------------------------ 1.52s
2026-04-09 01:17:32.147171 | orchestrator | Aggregate test results step one ----------------------------------------- 1.51s
2026-04-09 01:17:32.147178 | orchestrator | Write report file ------------------------------------------------------- 1.26s
2026-04-09 01:17:32.147184 | orchestrator | Get timestamp for report file ------------------------------------------- 1.03s
2026-04-09 01:17:32.147190 | orchestrator | Create report output directory ------------------------------------------ 0.70s
2026-04-09 01:17:32.147198 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.46s
2026-04-09 01:17:32.147205 | orchestrator | Print report file information ------------------------------------------- 0.42s
2026-04-09 01:17:32.147236 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s
2026-04-09 01:17:32.147244 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s
2026-04-09 01:17:32.147251 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-04-09 01:17:32.147258 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s
2026-04-09 01:17:32.147265 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2026-04-09 01:17:32.147272 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-04-09 01:17:32.147280 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.28s
2026-04-09 01:17:32.147287 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s
2026-04-09 01:17:32.147295 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-04-09 01:17:32.147303 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2026-04-09 01:17:32.147310 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2026-04-09 01:17:32.147317 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s
2026-04-09 01:17:32.321948 | orchestrator | + osism validate ceph-osds
2026-04-09 01:17:51.104750 | orchestrator |
2026-04-09 01:17:51.104843 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-09 01:17:51.104854 | orchestrator |
2026-04-09 01:17:51.104862 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-09 01:17:51.104867 | orchestrator | Thursday 09 April 2026 01:17:47 +0000 (0:00:00.487) 0:00:00.487 ********
2026-04-09 01:17:51.104871 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:51.104876 | orchestrator |
2026-04-09 01:17:51.104880 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 01:17:51.104884 | orchestrator | Thursday 09 April 2026 01:17:48 +0000 (0:00:00.975) 0:00:01.463 ********
2026-04-09 01:17:51.104889 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:51.104893 | orchestrator |
2026-04-09 01:17:51.104896 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-09 01:17:51.104900 | orchestrator | Thursday 09 April 2026 01:17:48 +0000 (0:00:00.242) 0:00:01.705 ********
2026-04-09 01:17:51.104904 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 01:17:51.104908 | orchestrator |
2026-04-09 01:17:51.104912 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-09 01:17:51.104916 | orchestrator | Thursday 09 April 2026 01:17:49 +0000 (0:00:00.671) 0:00:02.377 ********
2026-04-09 01:17:51.104919 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:17:51.104924 | orchestrator |
2026-04-09 01:17:51.104928 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-09 01:17:51.104932 | orchestrator | Thursday 09 April 2026 01:17:49 +0000 (0:00:00.114) 0:00:02.491 ********
2026-04-09 01:17:51.104936 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:17:51.104940 | orchestrator |
2026-04-09 01:17:51.104944 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-09 01:17:51.104948 | orchestrator | Thursday 09 April 2026 01:17:49 +0000 (0:00:00.110) 0:00:02.602 ********
2026-04-09 01:17:51.104951 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:17:51.104955 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:17:51.104959 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:17:51.104963 | orchestrator |
2026-04-09 01:17:51.104966 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-09 01:17:51.104970 | orchestrator | Thursday 09 April 2026 01:17:49 +0000 (0:00:00.433) 0:00:03.035 ********
2026-04-09 01:17:51.104974 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:17:51.104978 | orchestrator |
2026-04-09 01:17:51.104982 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-09 01:17:51.104998 | orchestrator | Thursday 09 April 2026 01:17:49 +0000 (0:00:00.166) 0:00:03.202 ********
2026-04-09 01:17:51.105002 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:17:51.105006 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:17:51.105010 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:17:51.105013 | orchestrator |
2026-04-09 01:17:51.105017 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-09 01:17:51.105021 | orchestrator | Thursday 09 April 2026 01:17:50 +0000 (0:00:00.337) 0:00:03.540 ********
2026-04-09 01:17:51.105025 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:17:51.105028 | orchestrator |
2026-04-09 01:17:51.105039 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-09 01:17:51.105043 | orchestrator | Thursday 09 April 2026 01:17:50 +0000 (0:00:00.277) 0:00:03.897 ********
2026-04-09 01:17:51.105047 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:17:51.105051 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:17:51.105055 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:17:51.105059 | orchestrator |
2026-04-09 01:17:51.105063 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-09 01:17:51.105066 | orchestrator | Thursday 09 April 2026 01:17:50 +0000 (0:00:00.277) 0:00:04.175 ********
2026-04-09 01:17:51.105072 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9936e481e490028b49ec39dd4e8cc49962cc3a4577c7a7914403bf11b0b69853', 'image':
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:17:51.105079 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6177a96d36bd9fba4a8b90ab4922d4a0546f27fe1714a1de182984704cf8723f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:17:51.105084 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c11e6e989ceb67c41e29e1b76f192729391cc1deb96b9c4732d928fa49c93bfc', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-09 01:17:51.105090 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b40306532eb59214f316a927dc4167e4c7e6b0e4708ddb6164a678ddc21f297', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-09 01:17:51.105102 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd09f8d04e73f9eb7af594e967bcdb7d6b7b74126128c33bbfbe31cf5406d7ecb', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-04-09 01:17:51.105122 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4491e1ae24beddfa024deae94a0c33c8b54c814e4c4748ec1c640b387d84dab5', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-09 01:17:51.105132 | orchestrator | skipping: [testbed-node-3] => (item={'id': '11d57859f2993ac673a88d7eadd202bad39cbe17d844147b6b19dfe45a967c74', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-09 01:17:51.105138 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': 'e294d89d59aa2678381c866a6f40eb45688c8eb8811c506a9feeedf0d11293a3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-09 01:17:51.105144 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c4ccf231ab1612af86f755999e1dd5f0bb5ed54c1463fd412d520903079a618b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:17:51.105150 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3d0150ae3a5da6fee87d9cf3636cc944fbd3830b5fa56c33a83b97933496472a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:17:51.105161 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c6bf978d41ddcb333b277b7b7f4f9a6dc687aec0dff5af2f922bd37699aa00d6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-09 01:17:51.105168 | orchestrator | ok: [testbed-node-3] => (item={'id': '4c1a6079f139780f5d82675b23609776388410222aad521a26c2e130f90e8f60', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-09 01:17:51.105174 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0aeb07d437a850a862c8f987697970d04c3c88bba9236037e46f3b221f78fc86', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-09 01:17:51.105180 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8fa79f852f7e9752ddc63e335d9ac94d934610dcd55168d567fe3654ac9c4a9f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
27 minutes (healthy)'})  2026-04-09 01:17:51.105186 | orchestrator | skipping: [testbed-node-3] => (item={'id': '738fd1ebc42d13d4977872fab8223be8a36ef72bdfa5c97ca1bfb4ce9bfd95ef', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-09 01:17:51.105192 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e8bfc726ebe1e16940e0542653cbe14ec6653d4b1168735c4467948679a6dba4', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:17:51.105198 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29e0dbfbcf742beca4e56bb9d68f66f476164eb8dcdec54f7e939db100c61085', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:17:51.105204 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b4835a6f2c2cb44d86c6b3e988ab6c3ea0cca9f9ae3464899f860aac552fb4cf', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:17:51.105210 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd258af0473219c36a0be2d7073f54d4b7b515fa52971e2c8079e0405973b4515', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:17:51.105216 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0a1ff076d97d23ce7b64685422cb63a614d40756917607a0bdd26c1d5e92ac2d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:17:51.105227 | orchestrator | skipping: [testbed-node-4] => (item={'id': '53289653226dd810f6c03f5c94c8c25a6e2cc91e6c7a10f2f31d8c91f08bfd22', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-09 01:17:51.105241 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2d6d8357c1b3ee829c9fe10a35c319f6061209f207435365a1aff9c7f55625c4', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-09 01:17:51.320762 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46af41436ffc413787056d83e2cf033504d072c3bb8e3c41a26f3879667058b3', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-04-09 01:17:51.320833 | orchestrator | skipping: [testbed-node-4] => (item={'id': '83ad083c45dd351fee78d1ebbc1b278fdc9cc49e996779f1dbbc10dd6f554466', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-09 01:17:51.320860 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c74f005a0d6783583a84e3b8fcc68807265447c07453e1811003c50de6cb8d52', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-09 01:17:51.320868 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b73f74a39d1b14a3688dbe9e71ecaa25606a39897a1de5225e90b5786f358f7c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-09 01:17:51.320875 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dec26b08d674f0a87e242405014449ce5a55f1899885765930b7ce3081fdfa98', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:17:51.320882 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'b425bb8ad21bd3b532e4011668edbc342d95bc145eb83698a8dab5972cb1fcd6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:17:51.320895 | orchestrator | ok: [testbed-node-4] => (item={'id': '3578a116a5cf61ec6c0f75d8de8d0d6cb1a5302987408a62e8ae06aabbe4cc26', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-09 01:17:51.320904 | orchestrator | ok: [testbed-node-4] => (item={'id': '15b1337014a9c672a5f0d7613d3eb1f3016edf8d71419f8f7f84dd5ea18dedaf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-09 01:17:51.320920 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df8a65646d9aea86365d5ac241f83587a560a05c848a1a72fad383f1b4224f7e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-09 01:17:51.320927 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5d6e5f98fc7b2e295f904d1a81a23097a42dfe49df020a86ccf62fda03242908', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-09 01:17:51.320933 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5aab2032bd4c78d689e5e6c455646fb763b868adcf4d88ddea523d24c33c8a2a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-09 01:17:51.320941 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'de00502332cc113406dc35e9512d4950125a92a388befbd659afc212dec459be', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:17:51.320948 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '071fd5c893ef70785f9a935488aa99948a9983960707197ff2a8f2ccbe8fab15', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:17:51.320955 | orchestrator | skipping: [testbed-node-4] => (item={'id': '89761de833da7411e677592a0ae9b3da90421b011ce901ab2df9bd68cacc30de', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:17:51.320962 | orchestrator | skipping: [testbed-node-5] => (item={'id': '984cbd4ba0694d745c1057863c98116647dde4dae9023cce1407bef2587bc578', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:17:51.320984 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb29f691e95717494ced45610394527afe2c8fb813d29fa3513efb590c760c47', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-09 01:17:51.320998 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e219e7847ea9a403fa43668b6714c560fda6219dbdd64537732aeed15d83a7a5', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-09 01:17:51.321004 | orchestrator | skipping: [testbed-node-5] => (item={'id': '90f9d663595c3290c5e14a3c547111938a863c35744a8ca452f53d3e05f8b5fc', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-09 01:17:51.321010 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8da1ece9d8e921d159830b4d5ba5773840986f00581c3c433c0d4fcddbdcbbf5', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 
'status': 'Up 13 minutes'})  2026-04-09 01:17:51.321017 | orchestrator | skipping: [testbed-node-5] => (item={'id': '354f7d6907033e6ef9c50a89f58bc161cc1681eb022fa4218fa2bea50bb1a44f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-09 01:17:51.321023 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f99021b744eba3f351e6254639cc82ab7309ffb7b0614af89ea88f5ee31f4c9c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-09 01:17:51.321029 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fbad332e71d3ac0f09f1b677e5cdb5f035cbb51aa579191f3103e619473d64f5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-09 01:17:51.321035 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b4a425aacd60f7caca41735a2fe11cad1a1914201a5e222bdd5de58a5d2b8c3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:17:51.321041 | orchestrator | skipping: [testbed-node-5] => (item={'id': '263bff441b7122684ac619960c0b6a375447f78caa9ab97219075b2447553d47', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-09 01:17:51.321063 | orchestrator | ok: [testbed-node-5] => (item={'id': '12a3943a2f9e2dc930c7af6553d8d9a9f5eb9c7d17ff105870db1febe719113d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-09 01:17:51.321069 | orchestrator | ok: [testbed-node-5] => (item={'id': '7e79db44465f9bdbc370a5e5e7a0742084ef3deeece7320a3c13940146c1e9d3', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 22 minutes'}) 2026-04-09 01:17:51.321075 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0b5d0bffb8b177646151f206351e46871e4411b1c0be0a1b94a3f5b7bb44dc32', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-09 01:17:51.321081 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e9ed9371a1f08cbc1fe1986f9127829e7768d9d9395cd5c13b62919ae900f222', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-09 01:17:51.321087 | orchestrator | skipping: [testbed-node-5] => (item={'id': '558b81718abefd46b22b8e59aef8af412821f313fbbd081d1c3ee5389d71a5f8', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-09 01:17:51.321096 | orchestrator | skipping: [testbed-node-5] => (item={'id': '91a97dfa35edc820ec089716cf2a838e54b8c15c5c2235726d07cb90169bb00a', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-09 01:17:51.321107 | orchestrator | skipping: [testbed-node-5] => (item={'id': '73b931d57b0449772787fa8c85df24071507fc181b3c5bdb392f908073fd958e', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:17:51.321118 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e5c1650ff20a505e5ab1f2a5feac8eb3c8d393473125551669ac49073ec8c588', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-09 01:18:03.908911 | orchestrator | 2026-04-09 01:18:03.908993 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ********************************
2026-04-09 01:18:03.909002 | orchestrator | Thursday 09 April 2026 01:17:51 +0000 (0:00:00.645) 0:00:04.820 ********
2026-04-09 01:18:03.909008 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:18:03.909014 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:18:03.909020 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:18:03.909025 | orchestrator |
2026-04-09 01:18:03.909031 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-04-09 01:18:03.909036 | orchestrator | Thursday 09 April 2026 01:17:51 +0000 (0:00:00.315) 0:00:05.136 ********
2026-04-09 01:18:03.909042 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909048 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:18:03.909053 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:18:03.909059 | orchestrator |
2026-04-09 01:18:03.909064 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-04-09 01:18:03.909069 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.297) 0:00:05.433 ********
2026-04-09 01:18:03.909075 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:18:03.909080 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:18:03.909085 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:18:03.909090 | orchestrator |
2026-04-09 01:18:03.909095 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-09 01:18:03.909101 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.295) 0:00:05.729 ********
2026-04-09 01:18:03.909106 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:18:03.909111 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:18:03.909116 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:18:03.909121 | orchestrator |
2026-04-09 01:18:03.909126 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-04-09 01:18:03.909132 | orchestrator | Thursday 09 April 2026 01:17:52 +0000 (0:00:00.425) 0:00:06.154 ********
2026-04-09 01:18:03.909137 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-04-09 01:18:03.909143 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-04-09 01:18:03.909148 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909154 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-04-09 01:18:03.909159 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-04-09 01:18:03.909164 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:18:03.909169 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-04-09 01:18:03.909175 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-04-09 01:18:03.909180 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:18:03.909185 | orchestrator |
2026-04-09 01:18:03.909190 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-04-09 01:18:03.909195 | orchestrator | Thursday 09 April 2026 01:17:53 +0000 (0:00:00.295) 0:00:06.450 ********
2026-04-09 01:18:03.909200 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:18:03.909206 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:18:03.909228 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:18:03.909234 | orchestrator |
2026-04-09 01:18:03.909239 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-09 01:18:03.909244 | orchestrator | Thursday 09 April 2026 01:17:53 +0000 (0:00:00.291) 0:00:06.742 ********
2026-04-09 01:18:03.909249 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909254 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:18:03.909260 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:18:03.909265 | orchestrator |
2026-04-09 01:18:03.909270 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-09 01:18:03.909275 | orchestrator | Thursday 09 April 2026 01:17:53 +0000 (0:00:00.291) 0:00:07.033 ********
2026-04-09 01:18:03.909280 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909285 | orchestrator | skipping: [testbed-node-4]
2026-04-09 01:18:03.909290 | orchestrator | skipping: [testbed-node-5]
2026-04-09 01:18:03.909295 | orchestrator |
2026-04-09 01:18:03.909300 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-04-09 01:18:03.909305 | orchestrator | Thursday 09 April 2026 01:17:54 +0000 (0:00:00.441) 0:00:07.474 ********
2026-04-09 01:18:03.909310 | orchestrator | ok: [testbed-node-3]
2026-04-09 01:18:03.909316 | orchestrator | ok: [testbed-node-4]
2026-04-09 01:18:03.909321 | orchestrator | ok: [testbed-node-5]
2026-04-09 01:18:03.909326 | orchestrator |
2026-04-09 01:18:03.909331 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-09 01:18:03.909336 | orchestrator | Thursday 09 April 2026 01:17:54 +0000 (0:00:00.283) 0:00:07.758 ********
2026-04-09 01:18:03.909341 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909347 | orchestrator |
2026-04-09 01:18:03.909352 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-09 01:18:03.909357 | orchestrator | Thursday 09 April 2026 01:17:54 +0000 (0:00:00.258) 0:00:08.017 ********
2026-04-09 01:18:03.909372 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909378 | orchestrator |
2026-04-09 01:18:03.909383 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-09 01:18:03.909388 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.259) 0:00:08.277 ********
2026-04-09 01:18:03.909393 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909398 | orchestrator |
2026-04-09 01:18:03.909403 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:03.909409 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.244) 0:00:08.522 ********
2026-04-09 01:18:03.909414 | orchestrator |
2026-04-09 01:18:03.909419 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:03.909424 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.070) 0:00:08.592 ********
2026-04-09 01:18:03.909429 | orchestrator |
2026-04-09 01:18:03.909434 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-09 01:18:03.909449 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.080) 0:00:08.673 ********
2026-04-09 01:18:03.909455 | orchestrator |
2026-04-09 01:18:03.909460 | orchestrator | TASK [Print report file information] *******************************************
2026-04-09 01:18:03.909465 | orchestrator | Thursday 09 April 2026 01:17:55 +0000 (0:00:00.069) 0:00:08.742 ********
2026-04-09 01:18:03.909470 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909475 | orchestrator |
2026-04-09 01:18:03.909480 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-04-09 01:18:03.909486 | orchestrator | Thursday 09 April 2026 01:17:56 +0000 (0:00:00.620) 0:00:09.363 ********
2026-04-09 01:18:03.909492 | orchestrator | skipping: [testbed-node-3]
2026-04-09 01:18:03.909498 | orchestrator |
2026-04-09 01:18:03.909504 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-09 01:18:03.909511 |
orchestrator | Thursday 09 April 2026 01:17:56 +0000 (0:00:00.245) 0:00:09.608 ******** 2026-04-09 01:18:03.909517 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909523 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:03.909535 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:03.909638 | orchestrator | 2026-04-09 01:18:03.909645 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-09 01:18:03.909651 | orchestrator | Thursday 09 April 2026 01:17:56 +0000 (0:00:00.280) 0:00:09.889 ******** 2026-04-09 01:18:03.909657 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909663 | orchestrator | 2026-04-09 01:18:03.909669 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-09 01:18:03.909675 | orchestrator | Thursday 09 April 2026 01:17:56 +0000 (0:00:00.256) 0:00:10.146 ******** 2026-04-09 01:18:03.909681 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 01:18:03.909688 | orchestrator | 2026-04-09 01:18:03.909694 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-09 01:18:03.909700 | orchestrator | Thursday 09 April 2026 01:17:58 +0000 (0:00:02.080) 0:00:12.227 ******** 2026-04-09 01:18:03.909706 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909778 | orchestrator | 2026-04-09 01:18:03.909785 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-09 01:18:03.909791 | orchestrator | Thursday 09 April 2026 01:17:59 +0000 (0:00:00.123) 0:00:12.350 ******** 2026-04-09 01:18:03.909797 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909803 | orchestrator | 2026-04-09 01:18:03.909810 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-09 01:18:03.909816 | orchestrator | Thursday 09 April 2026 01:17:59 +0000 (0:00:00.292) 
0:00:12.643 ******** 2026-04-09 01:18:03.909822 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:03.909829 | orchestrator | 2026-04-09 01:18:03.909835 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-09 01:18:03.909841 | orchestrator | Thursday 09 April 2026 01:17:59 +0000 (0:00:00.121) 0:00:12.764 ******** 2026-04-09 01:18:03.909848 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909854 | orchestrator | 2026-04-09 01:18:03.909860 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:03.909867 | orchestrator | Thursday 09 April 2026 01:17:59 +0000 (0:00:00.137) 0:00:12.902 ******** 2026-04-09 01:18:03.909873 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909878 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:03.909883 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:03.909888 | orchestrator | 2026-04-09 01:18:03.909893 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-09 01:18:03.909899 | orchestrator | Thursday 09 April 2026 01:18:00 +0000 (0:00:00.431) 0:00:13.333 ******** 2026-04-09 01:18:03.909904 | orchestrator | changed: [testbed-node-4] 2026-04-09 01:18:03.909909 | orchestrator | changed: [testbed-node-3] 2026-04-09 01:18:03.909914 | orchestrator | changed: [testbed-node-5] 2026-04-09 01:18:03.909919 | orchestrator | 2026-04-09 01:18:03.909924 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-09 01:18:03.909929 | orchestrator | Thursday 09 April 2026 01:18:01 +0000 (0:00:01.535) 0:00:14.869 ******** 2026-04-09 01:18:03.909934 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909939 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:03.909945 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:03.909950 | orchestrator | 2026-04-09 01:18:03.909958 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-09 01:18:03.909966 | orchestrator | Thursday 09 April 2026 01:18:01 +0000 (0:00:00.311) 0:00:15.180 ******** 2026-04-09 01:18:03.909975 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.909983 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:03.909991 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:03.909998 | orchestrator | 2026-04-09 01:18:03.910005 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-09 01:18:03.910068 | orchestrator | Thursday 09 April 2026 01:18:02 +0000 (0:00:00.459) 0:00:15.640 ******** 2026-04-09 01:18:03.910080 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:03.910088 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:03.910105 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:03.910113 | orchestrator | 2026-04-09 01:18:03.910120 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-09 01:18:03.910129 | orchestrator | Thursday 09 April 2026 01:18:02 +0000 (0:00:00.452) 0:00:16.092 ******** 2026-04-09 01:18:03.910144 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:03.910152 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:03.910161 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:03.910169 | orchestrator | 2026-04-09 01:18:03.910176 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-09 01:18:03.910182 | orchestrator | Thursday 09 April 2026 01:18:03 +0000 (0:00:00.324) 0:00:16.417 ******** 2026-04-09 01:18:03.910187 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:03.910192 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:03.910197 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:03.910202 | orchestrator | 2026-04-09 01:18:03.910207 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-09 01:18:03.910212 | orchestrator | Thursday 09 April 2026 01:18:03 +0000 (0:00:00.294) 0:00:16.711 ******** 2026-04-09 01:18:03.910217 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:03.910222 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:03.910227 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:03.910232 | orchestrator | 2026-04-09 01:18:03.910245 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-09 01:18:10.922974 | orchestrator | Thursday 09 April 2026 01:18:03 +0000 (0:00:00.459) 0:00:17.170 ******** 2026-04-09 01:18:10.923075 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:10.923087 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:10.923096 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:10.923104 | orchestrator | 2026-04-09 01:18:10.923112 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-09 01:18:10.923119 | orchestrator | Thursday 09 April 2026 01:18:04 +0000 (0:00:00.491) 0:00:17.661 ******** 2026-04-09 01:18:10.923126 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:10.923134 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:10.923140 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:10.923147 | orchestrator | 2026-04-09 01:18:10.923155 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-09 01:18:10.923162 | orchestrator | Thursday 09 April 2026 01:18:04 +0000 (0:00:00.472) 0:00:18.134 ******** 2026-04-09 01:18:10.923170 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:10.923177 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:10.923185 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:10.923192 | orchestrator | 2026-04-09 01:18:10.923200 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-09 
01:18:10.923208 | orchestrator | Thursday 09 April 2026 01:18:05 +0000 (0:00:00.312) 0:00:18.447 ******** 2026-04-09 01:18:10.923215 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:10.923224 | orchestrator | skipping: [testbed-node-4] 2026-04-09 01:18:10.923231 | orchestrator | skipping: [testbed-node-5] 2026-04-09 01:18:10.923239 | orchestrator | 2026-04-09 01:18:10.923247 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-09 01:18:10.923255 | orchestrator | Thursday 09 April 2026 01:18:05 +0000 (0:00:00.441) 0:00:18.888 ******** 2026-04-09 01:18:10.923263 | orchestrator | ok: [testbed-node-3] 2026-04-09 01:18:10.923271 | orchestrator | ok: [testbed-node-4] 2026-04-09 01:18:10.923279 | orchestrator | ok: [testbed-node-5] 2026-04-09 01:18:10.923286 | orchestrator | 2026-04-09 01:18:10.923294 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-09 01:18:10.923302 | orchestrator | Thursday 09 April 2026 01:18:05 +0000 (0:00:00.332) 0:00:19.221 ******** 2026-04-09 01:18:10.923310 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:10.923318 | orchestrator | 2026-04-09 01:18:10.923327 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-09 01:18:10.923335 | orchestrator | Thursday 09 April 2026 01:18:06 +0000 (0:00:00.254) 0:00:19.475 ******** 2026-04-09 01:18:10.923369 | orchestrator | skipping: [testbed-node-3] 2026-04-09 01:18:10.923377 | orchestrator | 2026-04-09 01:18:10.923385 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-09 01:18:10.923393 | orchestrator | Thursday 09 April 2026 01:18:06 +0000 (0:00:00.266) 0:00:19.741 ******** 2026-04-09 01:18:10.923401 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:10.923409 | orchestrator | 2026-04-09 01:18:10.923417 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-09 01:18:10.923424 | orchestrator | Thursday 09 April 2026 01:18:08 +0000 (0:00:01.673) 0:00:21.414 ******** 2026-04-09 01:18:10.923432 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:10.923440 | orchestrator | 2026-04-09 01:18:10.923448 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-09 01:18:10.923456 | orchestrator | Thursday 09 April 2026 01:18:08 +0000 (0:00:00.249) 0:00:21.664 ******** 2026-04-09 01:18:10.923464 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:10.923471 | orchestrator | 2026-04-09 01:18:10.923478 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:10.923486 | orchestrator | Thursday 09 April 2026 01:18:08 +0000 (0:00:00.249) 0:00:21.914 ******** 2026-04-09 01:18:10.923493 | orchestrator | 2026-04-09 01:18:10.923500 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:10.923507 | orchestrator | Thursday 09 April 2026 01:18:08 +0000 (0:00:00.066) 0:00:21.981 ******** 2026-04-09 01:18:10.923514 | orchestrator | 2026-04-09 01:18:10.923521 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-09 01:18:10.923528 | orchestrator | Thursday 09 April 2026 01:18:08 +0000 (0:00:00.206) 0:00:22.187 ******** 2026-04-09 01:18:10.923537 | orchestrator | 2026-04-09 01:18:10.923544 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-09 01:18:10.923552 | orchestrator | Thursday 09 April 2026 01:18:08 +0000 (0:00:00.068) 0:00:22.256 ******** 2026-04-09 01:18:10.923611 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-09 01:18:10.923619 | orchestrator | 
2026-04-09 01:18:10.923626 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-09 01:18:10.923634 | orchestrator | Thursday 09 April 2026 01:18:10 +0000 (0:00:01.262) 0:00:23.518 ******** 2026-04-09 01:18:10.923641 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-09 01:18:10.923648 | orchestrator |  "msg": [ 2026-04-09 01:18:10.923657 | orchestrator |  "Validator run completed.", 2026-04-09 01:18:10.923666 | orchestrator |  "You can find the report file here:", 2026-04-09 01:18:10.923674 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-09T01:17:48+00:00-report.json", 2026-04-09 01:18:10.923683 | orchestrator |  "on the following host:", 2026-04-09 01:18:10.923691 | orchestrator |  "testbed-manager" 2026-04-09 01:18:10.923699 | orchestrator |  ] 2026-04-09 01:18:10.923707 | orchestrator | } 2026-04-09 01:18:10.923715 | orchestrator | 2026-04-09 01:18:10.923722 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:18:10.923731 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 01:18:10.923741 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 01:18:10.923771 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 01:18:10.923779 | orchestrator | 2026-04-09 01:18:10.923786 | orchestrator | 2026-04-09 01:18:10.923794 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:18:10.923860 | orchestrator | Thursday 09 April 2026 01:18:10 +0000 (0:00:00.379) 0:00:23.897 ******** 2026-04-09 01:18:10.923868 | orchestrator | =============================================================================== 2026-04-09 01:18:10.923876 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.08s 2026-04-09 01:18:10.923883 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2026-04-09 01:18:10.923890 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.54s 2026-04-09 01:18:10.923897 | orchestrator | Write report file ------------------------------------------------------- 1.26s 2026-04-09 01:18:10.923904 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2026-04-09 01:18:10.923911 | orchestrator | Create report output directory ------------------------------------------ 0.67s 2026-04-09 01:18:10.923917 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.65s 2026-04-09 01:18:10.923924 | orchestrator | Print report file information ------------------------------------------- 0.62s 2026-04-09 01:18:10.923931 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-04-09 01:18:10.923938 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.47s 2026-04-09 01:18:10.923945 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.46s 2026-04-09 01:18:10.923951 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.46s 2026-04-09 01:18:10.923957 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.45s 2026-04-09 01:18:10.923965 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.44s 2026-04-09 01:18:10.923972 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.44s 2026-04-09 01:18:10.923978 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.43s 2026-04-09 01:18:10.923985 | orchestrator | Prepare test data 
------------------------------------------------------- 0.43s 2026-04-09 01:18:10.923991 | orchestrator | Prepare test data ------------------------------------------------------- 0.43s 2026-04-09 01:18:10.923998 | orchestrator | Print report file information ------------------------------------------- 0.38s 2026-04-09 01:18:10.924005 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.36s 2026-04-09 01:18:11.118749 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-09 01:18:11.124066 | orchestrator | + set -e 2026-04-09 01:18:11.124150 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:18:11.124161 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 01:18:11.124168 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:18:11.124173 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:18:11.124180 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:18:11.124187 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 01:18:11.124194 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:18:11.124199 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:18:11.124206 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:18:11.124211 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 01:18:11.124217 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 01:18:11.124224 | orchestrator | ++ export ARA=false 2026-04-09 01:18:11.124230 | orchestrator | ++ ARA=false 2026-04-09 01:18:11.124235 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:18:11.124241 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:18:11.124247 | orchestrator | ++ export TEMPEST=true 2026-04-09 01:18:11.124253 | orchestrator | ++ TEMPEST=true 2026-04-09 01:18:11.124259 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:18:11.124286 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:18:11.124294 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 01:18:11.124300 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 01:18:11.124316 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:18:11.124335 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:18:11.124347 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:18:11.124354 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:18:11.124360 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:18:11.124388 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:18:11.124395 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 01:18:11.124426 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:18:11.124433 | orchestrator | + source /etc/os-release 2026-04-09 01:18:11.124439 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-09 01:18:11.124444 | orchestrator | ++ NAME=Ubuntu 2026-04-09 01:18:11.124450 | orchestrator | ++ VERSION_ID=24.04 2026-04-09 01:18:11.124456 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-09 01:18:11.124463 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-09 01:18:11.124467 | orchestrator | ++ ID=ubuntu 2026-04-09 01:18:11.124472 | orchestrator | ++ ID_LIKE=debian 2026-04-09 01:18:11.124476 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-09 01:18:11.124480 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-09 01:18:11.124484 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-09 01:18:11.124489 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-09 01:18:11.124494 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-09 01:18:11.124498 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-09 01:18:11.124502 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-09 01:18:11.124528 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-09 
01:18:11.124534 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 01:18:11.157193 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-09 01:18:35.423793 | orchestrator | 2026-04-09 01:18:35.423881 | orchestrator | # Status of Elasticsearch 2026-04-09 01:18:35.423892 | orchestrator | 2026-04-09 01:18:35.423897 | orchestrator | + pushd /opt/configuration/contrib 2026-04-09 01:18:35.423904 | orchestrator | + echo 2026-04-09 01:18:35.423909 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-09 01:18:35.423912 | orchestrator | + echo 2026-04-09 01:18:35.423917 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-09 01:18:35.600425 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-09 01:18:35.600525 | orchestrator | 2026-04-09 01:18:35.600536 | orchestrator | # Status of MariaDB 2026-04-09 01:18:35.600545 | orchestrator | 2026-04-09 01:18:35.600555 | orchestrator | + echo 2026-04-09 01:18:35.600562 | orchestrator | + echo '# Status of MariaDB' 2026-04-09 01:18:35.600569 | orchestrator | + echo 2026-04-09 01:18:35.601464 | orchestrator | ++ semver latest 10.0.0-0 2026-04-09 01:18:35.657791 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 01:18:35.657891 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 01:18:35.657901 | orchestrator | + osism status database 2026-04-09 01:18:37.234396 | orchestrator | 2026-04-09 01:18:37 | ERROR  | Unable to get ansible vault password 2026-04-09 01:18:37.313727 | orchestrator | 2026-04-09 01:18:37 | 
ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:18:37.313786 | orchestrator | 2026-04-09 01:18:37 | ERROR  | Dropping encrypted entries 2026-04-09 01:18:37.313808 | orchestrator | 2026-04-09 01:18:37 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-04-09 01:18:37.313813 | orchestrator | 2026-04-09 01:18:37 | INFO  | Cluster Status: Primary 2026-04-09 01:18:37.313819 | orchestrator | 2026-04-09 01:18:37 | INFO  | Connected: ON 2026-04-09 01:18:37.313823 | orchestrator | 2026-04-09 01:18:37 | INFO  | Ready: ON 2026-04-09 01:18:37.313827 | orchestrator | 2026-04-09 01:18:37 | INFO  | Cluster Size: 3 2026-04-09 01:18:37.313831 | orchestrator | 2026-04-09 01:18:37 | INFO  | Local State: Synced 2026-04-09 01:18:37.313835 | orchestrator | 2026-04-09 01:18:37 | INFO  | Cluster State UUID: baed26d7-33ae-11f1-8a9c-5b3dfbccec39 2026-04-09 01:18:37.313839 | orchestrator | 2026-04-09 01:18:37 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-09 01:18:37.313844 | orchestrator | 2026-04-09 01:18:37 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-09 01:18:37.313871 | orchestrator | 2026-04-09 01:18:37 | INFO  | Local Node UUID: ec367f42-33ae-11f1-896e-2f8e6dacbcda 2026-04-09 01:18:37.313875 | orchestrator | 2026-04-09 01:18:37 | INFO  | Flow Control Paused: 0.00% 2026-04-09 01:18:37.313879 | orchestrator | 2026-04-09 01:18:37 | INFO  | Recv Queue Avg: 0 2026-04-09 01:18:37.313883 | orchestrator | 2026-04-09 01:18:37 | INFO  | Send Queue Avg: 0.000753352 2026-04-09 01:18:37.313887 | orchestrator | 2026-04-09 01:18:37 | INFO  | Transactions: 4394 local commits, 6579 replicated, 96 received 2026-04-09 01:18:37.313891 | orchestrator | 2026-04-09 01:18:37 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-09 01:18:37.313895 | orchestrator | 2026-04-09 01:18:37 | INFO  | MariaDB Uptime: 22 minutes, 17 seconds 2026-04-09 01:18:37.313899 
| orchestrator | 2026-04-09 01:18:37 | INFO  | Threads: 134 connected, 1 running 2026-04-09 01:18:37.313903 | orchestrator | 2026-04-09 01:18:37 | INFO  | Queries: 209378 total, 0 slow 2026-04-09 01:18:37.313906 | orchestrator | 2026-04-09 01:18:37 | INFO  | Aborted Connects: 138 2026-04-09 01:18:37.313911 | orchestrator | 2026-04-09 01:18:37 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-09 01:18:37.479446 | orchestrator | 2026-04-09 01:18:37.479515 | orchestrator | # Status of Prometheus 2026-04-09 01:18:37.479521 | orchestrator | 2026-04-09 01:18:37.479526 | orchestrator | + echo 2026-04-09 01:18:37.479530 | orchestrator | + echo '# Status of Prometheus' 2026-04-09 01:18:37.479535 | orchestrator | + echo 2026-04-09 01:18:37.479539 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-09 01:18:37.528698 | orchestrator | Unauthorized 2026-04-09 01:18:37.532158 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-09 01:18:37.591990 | orchestrator | Unauthorized 2026-04-09 01:18:37.595280 | orchestrator | 2026-04-09 01:18:37.595358 | orchestrator | # Status of RabbitMQ 2026-04-09 01:18:37.595382 | orchestrator | 2026-04-09 01:18:37.595389 | orchestrator | + echo 2026-04-09 01:18:37.595395 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-09 01:18:37.595402 | orchestrator | + echo 2026-04-09 01:18:37.595833 | orchestrator | ++ semver latest 10.0.0-0 2026-04-09 01:18:37.653783 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-09 01:18:37.653853 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 01:18:37.653860 | orchestrator | + osism status messaging 2026-04-09 01:18:44.812295 | orchestrator | 2026-04-09 01:18:44 | ERROR  | Unable to get ansible vault password 2026-04-09 01:18:44.812393 | orchestrator | 2026-04-09 01:18:44 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:18:44.812404 | orchestrator | 2026-04-09 
01:18:44 | ERROR  | Dropping encrypted entries 2026-04-09 01:18:44.844269 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-04-09 01:18:44.900052 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-04-09 01:18:44.900175 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-04-09 01:18:44.900187 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-09 01:18:44.900203 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-09 01:18:44.900212 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:18:44.900220 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:18:44.900291 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-09 01:18:44.900921 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Connections: 208, Channels: 207, Queues: 173 2026-04-09 01:18:44.901006 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Messages: 231 total, 231 ready, 0 unacked 2026-04-09 01:18:44.901515 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Message Rates: 7.0/s publish, 7.2/s deliver 2026-04-09 01:18:44.901544 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Disk Free: 57.2 GB (limit: 0.0 GB) 2026-04-09 01:18:44.901553 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-09 01:18:44.901559 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-0] File Descriptors: 110/1024 2026-04-09 01:18:44.901566 | orchestrator | 2026-04-09 01:18:44 | INFO  | 
[testbed-node-0] Sockets: 64/832 2026-04-09 01:18:44.901758 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-09 01:18:44.956778 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-09 01:18:44.956850 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-09 01:18:44.956856 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-09 01:18:44.956861 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-09 01:18:44.956874 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:18:44.956879 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:18:44.957119 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-09 01:18:44.957529 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Connections: 208, Channels: 207, Queues: 173 2026-04-09 01:18:44.957559 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Messages: 231 total, 231 ready, 0 unacked 2026-04-09 01:18:44.957568 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Message Rates: 7.0/s publish, 7.2/s deliver 2026-04-09 01:18:44.957810 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Disk Free: 57.4 GB (limit: 0.0 GB) 2026-04-09 01:18:44.957961 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-09 01:18:44.957975 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] File Descriptors: 125/1024 2026-04-09 01:18:44.957979 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-1] 
Sockets: 76/832 2026-04-09 01:18:44.958205 | orchestrator | 2026-04-09 01:18:44 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-09 01:18:45.011256 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-09 01:18:45.011397 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-09 01:18:45.011409 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-09 01:18:45.011425 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-09 01:18:45.011434 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:18:45.011700 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-09 01:18:45.011901 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-09 01:18:45.011916 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Connections: 208, Channels: 207, Queues: 173 2026-04-09 01:18:45.012308 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Messages: 231 total, 231 ready, 0 unacked 2026-04-09 01:18:45.013735 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Message Rates: 7.0/s publish, 7.2/s deliver 2026-04-09 01:18:45.013764 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Disk Free: 57.4 GB (limit: 0.0 GB) 2026-04-09 01:18:45.013769 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-09 01:18:45.013774 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] File Descriptors: 114/1024 2026-04-09 01:18:45.013778 | orchestrator | 2026-04-09 01:18:45 | INFO  | [testbed-node-2] Sockets: 68/832 
2026-04-09 01:18:45.013783 | orchestrator | 2026-04-09 01:18:45 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-09 01:18:45.238100 | orchestrator | 2026-04-09 01:18:45.238198 | orchestrator | # Status of Redis 2026-04-09 01:18:45.238207 | orchestrator | 2026-04-09 01:18:45.238212 | orchestrator | + echo 2026-04-09 01:18:45.238218 | orchestrator | + echo '# Status of Redis' 2026-04-09 01:18:45.238225 | orchestrator | + echo 2026-04-09 01:18:45.238232 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-09 01:18:45.243837 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001680s;;;0.000000;10.000000 2026-04-09 01:18:45.243928 | orchestrator | + popd 2026-04-09 01:18:45.243942 | orchestrator | 2026-04-09 01:18:45.243952 | orchestrator | # Create backup of MariaDB database 2026-04-09 01:18:45.243962 | orchestrator | 2026-04-09 01:18:45.243971 | orchestrator | + echo 2026-04-09 01:18:45.243980 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-09 01:18:45.243988 | orchestrator | + echo 2026-04-09 01:18:45.243997 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-09 01:18:46.515961 | orchestrator | 2026-04-09 01:18:46 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-09 01:18:46.577320 | orchestrator | 2026-04-09 01:18:46 | INFO  | Task 442a50d9-2141-4643-ad90-dcbf4b025d8f (mariadb_backup) was prepared for execution. 2026-04-09 01:18:46.577412 | orchestrator | 2026-04-09 01:18:46 | INFO  | It takes a moment until task 442a50d9-2141-4643-ad90-dcbf4b025d8f (mariadb_backup) has been started and output is visible here. 
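The per-node checks above read the RabbitMQ Management API (`/api/overview`-style data fetched from each node on port 15672 as the `openstack` user) and then declare the cluster validation PASSED. A minimal sketch of that validation logic, assuming the node data has already been fetched and decoded from JSON (the HTTP fetch itself, e.g. `requests.get` with basic auth, is omitted here):

```python
# Sketch of the cluster validation the job log shows: expected size,
# all members running, no network partitions. `nodes` stands in for the
# decoded JSON of a Management API nodes listing; the exact payload
# shape used by the job is an assumption.

def validate_cluster(nodes, expected_size):
    """Return a list of problems; an empty list means the check passed."""
    problems = []
    if len(nodes) != expected_size:
        problems.append(f"cluster size {len(nodes)} != {expected_size}")
    for node in nodes:
        if not node.get("running", False):
            problems.append(f"{node['name']} not running")
        if node.get("partitions"):
            problems.append(f"{node['name']} sees partitions: {node['partitions']}")
    return problems


if __name__ == "__main__":
    # Values mirroring the healthy 3-node cluster in the log above.
    nodes = [
        {"name": f"rabbit@testbed-node-{i}", "running": True, "partitions": []}
        for i in range(3)
    ]
    print("PASSED" if not validate_cluster(nodes, 3) else "FAILED")
```

With the log's values (3 nodes, all running, `Partitions: None`) the problem list is empty, matching the "RabbitMQ Cluster validation PASSED" line.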
2026-04-09 01:19:12.930289 | orchestrator | 2026-04-09 01:19:12.930380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 01:19:12.930391 | orchestrator | 2026-04-09 01:19:12.930397 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 01:19:12.930404 | orchestrator | Thursday 09 April 2026 01:18:49 +0000 (0:00:00.231) 0:00:00.231 ******** 2026-04-09 01:19:12.930410 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:19:12.930417 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:19:12.930423 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:19:12.930428 | orchestrator | 2026-04-09 01:19:12.930434 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 01:19:12.930440 | orchestrator | Thursday 09 April 2026 01:18:49 +0000 (0:00:00.310) 0:00:00.542 ******** 2026-04-09 01:19:12.930446 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-09 01:19:12.930453 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-09 01:19:12.930459 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-09 01:19:12.930465 | orchestrator | 2026-04-09 01:19:12.930471 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-09 01:19:12.930477 | orchestrator | 2026-04-09 01:19:12.930503 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-09 01:19:12.930508 | orchestrator | Thursday 09 April 2026 01:18:50 +0000 (0:00:00.471) 0:00:01.013 ******** 2026-04-09 01:19:12.930512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 01:19:12.930516 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 01:19:12.930520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 01:19:12.930524 | orchestrator | 
2026-04-09 01:19:12.930527 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 01:19:12.930531 | orchestrator | Thursday 09 April 2026 01:18:50 +0000 (0:00:00.398) 0:00:01.412 ******** 2026-04-09 01:19:12.930536 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 01:19:12.930541 | orchestrator | 2026-04-09 01:19:12.930545 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-09 01:19:12.930550 | orchestrator | Thursday 09 April 2026 01:18:51 +0000 (0:00:00.610) 0:00:02.022 ******** 2026-04-09 01:19:12.930553 | orchestrator | ok: [testbed-node-0] 2026-04-09 01:19:12.930560 | orchestrator | ok: [testbed-node-1] 2026-04-09 01:19:12.930566 | orchestrator | ok: [testbed-node-2] 2026-04-09 01:19:12.930573 | orchestrator | 2026-04-09 01:19:12.930582 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-09 01:19:12.930590 | orchestrator | Thursday 09 April 2026 01:18:54 +0000 (0:00:03.291) 0:00:05.314 ******** 2026-04-09 01:19:12.930595 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:19:12.930603 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:19:12.930609 | orchestrator | changed: [testbed-node-0] 2026-04-09 01:19:12.930615 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-09 01:19:12.930621 | orchestrator | 2026-04-09 01:19:12.930627 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-09 01:19:12.930633 | orchestrator | skipping: no hosts matched 2026-04-09 01:19:12.930638 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-09 01:19:12.930644 | orchestrator | 2026-04-09 01:19:12.930650 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-09 01:19:12.930655 | orchestrator | skipping: no hosts matched 2026-04-09 01:19:12.930661 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-09 01:19:12.930667 | orchestrator | mariadb_bootstrap_restart 2026-04-09 01:19:12.930672 | orchestrator | 2026-04-09 01:19:12.930698 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-09 01:19:12.930705 | orchestrator | skipping: no hosts matched 2026-04-09 01:19:12.930711 | orchestrator | 2026-04-09 01:19:12.930717 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-09 01:19:12.930723 | orchestrator | 2026-04-09 01:19:12.930729 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-09 01:19:12.930735 | orchestrator | Thursday 09 April 2026 01:19:12 +0000 (0:00:17.367) 0:00:22.682 ******** 2026-04-09 01:19:12.930757 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:19:12.930764 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:19:12.930771 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:19:12.930775 | orchestrator | 2026-04-09 01:19:12.930779 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-09 01:19:12.930783 | orchestrator | Thursday 09 April 2026 01:19:12 +0000 (0:00:00.304) 0:00:22.986 ******** 2026-04-09 01:19:12.930787 | orchestrator | skipping: [testbed-node-0] 2026-04-09 01:19:12.930791 | orchestrator | skipping: [testbed-node-1] 2026-04-09 01:19:12.930794 | orchestrator | skipping: [testbed-node-2] 2026-04-09 01:19:12.930798 | orchestrator | 2026-04-09 01:19:12.930802 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:19:12.930808 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-09 01:19:12.930819 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 01:19:12.930823 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 01:19:12.930827 | orchestrator | 2026-04-09 01:19:12.930831 | orchestrator | 2026-04-09 01:19:12.930834 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:19:12.930838 | orchestrator | Thursday 09 April 2026 01:19:12 +0000 (0:00:00.222) 0:00:23.209 ******** 2026-04-09 01:19:12.930842 | orchestrator | =============================================================================== 2026-04-09 01:19:12.930846 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.37s 2026-04-09 01:19:12.930863 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.29s 2026-04-09 01:19:12.930868 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.61s 2026-04-09 01:19:12.930872 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-04-09 01:19:12.930877 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2026-04-09 01:19:12.930881 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-09 01:19:12.930885 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-04-09 01:19:12.930890 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2026-04-09 01:19:13.120242 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-09 01:19:13.128245 | orchestrator | + set -e 2026-04-09 01:19:13.128372 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 01:19:13.128387 | 
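The PLAY RECAP above (`failed=0`, `unreachable=0` on every host) is the signal that the `mariadb_backup` run succeeded. A small sketch of evaluating such recap lines programmatically, assuming the standard Ansible `host : ok=… failed=…` counter format:

```python
import re

# Parse Ansible "PLAY RECAP" lines such as:
#   testbed-node-0 : ok=6 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
# and report hosts whose run cannot be considered successful.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def failed_hosts(recap_lines):
    bad = []
    for line in recap_lines:
        m = RECAP_RE.match(line.strip())
        if not m:
            continue  # not a recap counter line
        counters = {
            key: int(value)
            for key, value in (pair.split("=") for pair in m.group("counters").split())
        }
        if counters.get("failed", 0) or counters.get("unreachable", 0):
            bad.append(m.group("host"))
    return bad
```

Applied to the three recap lines above, this returns an empty list, so the job proceeds to `300-openstack.sh`.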
orchestrator | ++ export INTERACTIVE=false 2026-04-09 01:19:13.128395 | orchestrator | ++ INTERACTIVE=false 2026-04-09 01:19:13.128401 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 01:19:13.128408 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 01:19:13.128423 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-09 01:19:13.129846 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-09 01:19:13.134400 | orchestrator | 2026-04-09 01:19:13.134479 | orchestrator | # OpenStack endpoints 2026-04-09 01:19:13.134489 | orchestrator | 2026-04-09 01:19:13.134496 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:19:13.134503 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:19:13.134511 | orchestrator | + export OS_CLOUD=admin 2026-04-09 01:19:13.134517 | orchestrator | + OS_CLOUD=admin 2026-04-09 01:19:13.134524 | orchestrator | + echo 2026-04-09 01:19:13.134531 | orchestrator | + echo '# OpenStack endpoints' 2026-04-09 01:19:13.134538 | orchestrator | + echo 2026-04-09 01:19:13.134544 | orchestrator | + openstack endpoint list 2026-04-09 01:19:16.280989 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 01:19:16.281066 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-09 01:19:16.281078 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 01:19:16.281084 | orchestrator | | 0538079378ad474ba93653712290aeae | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-09 01:19:16.281091 | orchestrator | | 
09919e6ddabf4dfcbc6b9a57e92095a2 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-09 01:19:16.281111 | orchestrator | | 0cb2d6d4f1a146cfaffa041939494dc5 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-09 01:19:16.281118 | orchestrator | | 1006938ce1de40e59adafa86ca1afb28 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-09 01:19:16.281145 | orchestrator | | 11aa86395f294285897a1e9f5b458655 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-09 01:19:16.281150 | orchestrator | | 1f3224a34e6047a5aec99d243cfdd4e2 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-09 01:19:16.281154 | orchestrator | | 22f2c31e82664a4586bfcd919cae98ef | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-09 01:19:16.281158 | orchestrator | | 2393d08682d4441eb7ac8cf1547e776b | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-09 01:19:16.281161 | orchestrator | | 23f81e5f07b941c78289ed0c11940a54 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-09 01:19:16.281165 | orchestrator | | 27ba862bae25404988971f42d3a1ca3f | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-09 01:19:16.281169 | orchestrator | | 4affe0180f084aba92cb5114aa842f0a | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-09 01:19:16.281173 | orchestrator | | 73a47d62d1d147e5b5c29e3f4bbbc1c4 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-09 01:19:16.281176 | orchestrator | | 75e17153abbb4f50abe26b9a912ca900 | RegionOne | neutron | network | True | public | 
https://api.testbed.osism.xyz:9696 | 2026-04-09 01:19:16.281180 | orchestrator | | 7cf9ada3ec974d5a99244d52b87782b7 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-09 01:19:16.281184 | orchestrator | | 88965b83f21446e98a5c33ab87bff713 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-09 01:19:16.281187 | orchestrator | | 92780e3071394403b495761234f38682 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-09 01:19:16.281191 | orchestrator | | aff5c396939c44a2b74132b2f160bb14 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-09 01:19:16.281195 | orchestrator | | b4458ff5fde3414586c6bb52868d8cc3 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-09 01:19:16.281199 | orchestrator | | bf6cb6b06402451e8d080405b34fc76b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-09 01:19:16.281202 | orchestrator | | c81443afd5194c20bc8f82b3e63c7e31 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-09 01:19:16.281218 | orchestrator | | d4eb1ae631ac40e2b8c03f45bd07e5e4 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-09 01:19:16.281222 | orchestrator | | ec0adf7703f74080998ce11509438ec6 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-09 01:19:16.281226 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-09 01:19:16.499647 | orchestrator | 2026-04-09 01:19:16.499794 | orchestrator | # Cinder 2026-04-09 01:19:16.499846 | orchestrator | 2026-04-09 
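The endpoint table above pairs each service type with both a `public` and an `internal` interface URL. A sketch of checking that pairing, given the table rows as `(service_type, interface)` tuples; fetching the rows live (for example via openstacksdk with `OS_CLOUD=admin`, as the script does with the CLI) is assumed to happen elsewhere:

```python
from collections import defaultdict

# Verify that every registered service type exposes all required
# endpoint interfaces. Rows are (service_type, interface) pairs, i.e.
# the "Service Type" and "Interface" columns of the table above.

def missing_interfaces(rows, required=("public", "internal")):
    """Map service_type -> sorted list of missing interfaces (empty dict = OK)."""
    seen = defaultdict(set)
    for service_type, interface in rows:
        seen[service_type].add(interface)
    return {
        svc: sorted(set(required) - interfaces)
        for svc, interfaces in seen.items()
        if not set(required) <= interfaces
    }
```

For the catalog in the log every service type appears with both interfaces, so the result would be an empty dict.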
01:19:16.499854 | orchestrator | + echo 2026-04-09 01:19:16.499861 | orchestrator | + echo '# Cinder' 2026-04-09 01:19:16.499868 | orchestrator | + echo 2026-04-09 01:19:16.499875 | orchestrator | + openstack volume service list 2026-04-09 01:19:20.370432 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 01:19:20.370534 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-09 01:19:20.370544 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 01:19:20.370551 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T01:19:19.000000 | 2026-04-09 01:19:20.370574 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T01:19:19.000000 | 2026-04-09 01:19:20.370580 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-09T01:19:19.000000 | 2026-04-09 01:19:20.370586 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-09T01:19:19.000000 | 2026-04-09 01:19:20.370592 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-09T01:19:13.000000 | 2026-04-09 01:19:20.370599 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-09T01:19:14.000000 | 2026-04-09 01:19:20.370604 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-09T01:19:16.000000 | 2026-04-09 01:19:20.370608 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-09T01:19:19.000000 | 2026-04-09 01:19:20.370612 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-09T01:19:19.000000 | 2026-04-09 01:19:20.370616 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-09 01:19:20.723490 | orchestrator | 2026-04-09 01:19:20.723577 | orchestrator | # Neutron 2026-04-09 01:19:20.723586 | orchestrator | 2026-04-09 01:19:20.723593 | orchestrator | + echo 2026-04-09 01:19:20.723600 | orchestrator | + echo '# Neutron' 2026-04-09 01:19:20.723608 | orchestrator | + echo 2026-04-09 01:19:20.723615 | orchestrator | + openstack network agent list 2026-04-09 01:19:23.437207 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 01:19:23.437289 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-09 01:19:23.437296 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 01:19:23.437303 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-09 01:19:23.437320 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-09 01:19:23.437358 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-09 01:19:23.437366 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-09 01:19:23.437373 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-09 01:19:23.437379 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-09 01:19:23.437386 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | 
neutron-ovn-metadata-agent | 2026-04-09 01:19:23.437413 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 01:19:23.437420 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-09 01:19:23.437426 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-09 01:19:23.772424 | orchestrator | + openstack network service provider list 2026-04-09 01:19:26.276082 | orchestrator | +---------------+------+---------+ 2026-04-09 01:19:26.276162 | orchestrator | | Service Type | Name | Default | 2026-04-09 01:19:26.276168 | orchestrator | +---------------+------+---------+ 2026-04-09 01:19:26.276172 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-09 01:19:26.276177 | orchestrator | +---------------+------+---------+ 2026-04-09 01:19:26.513988 | orchestrator | 2026-04-09 01:19:26.514185 | orchestrator | # Nova 2026-04-09 01:19:26.514199 | orchestrator | 2026-04-09 01:19:26.514205 | orchestrator | + echo 2026-04-09 01:19:26.514212 | orchestrator | + echo '# Nova' 2026-04-09 01:19:26.514219 | orchestrator | + echo 2026-04-09 01:19:26.514225 | orchestrator | + openstack compute service list 2026-04-09 01:19:29.293131 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 01:19:29.293226 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-09 01:19:29.293235 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 01:19:29.293242 | orchestrator | | c619a698-d924-4039-a953-05927834753d | nova-scheduler | testbed-node-2 | internal | 
enabled | up | 2026-04-09T01:19:22.000000 | 2026-04-09 01:19:29.293249 | orchestrator | | a87971a4-6a91-4d36-90af-a16cc02c98ce | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-09T01:19:24.000000 | 2026-04-09 01:19:29.293255 | orchestrator | | 9eab44e0-1db0-419d-8b12-a835b2e40232 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-09T01:19:20.000000 | 2026-04-09 01:19:29.293279 | orchestrator | | 0176a932-f5cc-4669-8559-e5c2fc24027b | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-09T01:19:23.000000 | 2026-04-09 01:19:29.293287 | orchestrator | | 268aebc7-2f9f-464d-aff4-cfecf0f29563 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-09T01:19:25.000000 | 2026-04-09 01:19:29.293293 | orchestrator | | 2c04a445-19f5-452d-91a1-efebaff59605 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-09T01:19:26.000000 | 2026-04-09 01:19:29.293299 | orchestrator | | 2f38a126-c2aa-4191-8d05-40df6c57162a | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-09T01:19:28.000000 | 2026-04-09 01:19:29.293305 | orchestrator | | a5b7fdea-5152-48d8-9088-ef5d3c7b9b86 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-09T01:19:28.000000 | 2026-04-09 01:19:29.293311 | orchestrator | | 08ee9582-9f34-4bc2-84ba-cacd01922fdc | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-09T01:19:28.000000 | 2026-04-09 01:19:29.293318 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-09 01:19:29.534661 | orchestrator | + openstack hypervisor list 2026-04-09 01:19:32.739855 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 01:19:32.739985 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-09 01:19:32.739995 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 01:19:32.739999 | orchestrator | | d5892215-c3b5-402b-ab13-99e7288cea5a | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-09 01:19:32.740003 | orchestrator | | 3d91c3e6-f4ba-4e3e-aa83-16d5cb25eccb | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-09 01:19:32.740007 | orchestrator | | 757af29b-6e3b-4b2c-abbf-821591fea483 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-09 01:19:32.740031 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-09 01:19:32.971258 | orchestrator | 2026-04-09 01:19:32.971326 | orchestrator | # Run OpenStack test play 2026-04-09 01:19:32.971333 | orchestrator | 2026-04-09 01:19:32.971337 | orchestrator | + echo 2026-04-09 01:19:32.971342 | orchestrator | + echo '# Run OpenStack test play' 2026-04-09 01:19:32.971347 | orchestrator | + echo 2026-04-09 01:19:32.971352 | orchestrator | + osism apply --environment openstack test 2026-04-09 01:19:34.263267 | orchestrator | 2026-04-09 01:19:34 | INFO  | Trying to run play test in environment openstack 2026-04-09 01:19:43.647009 | orchestrator | 2026-04-09 01:19:43 | INFO  | Prepare task for execution of test. 2026-04-09 01:19:43.727238 | orchestrator | 2026-04-09 01:19:43 | INFO  | Task 00b8e843-2db9-4ff3-86d5-8791cd5feab5 (test) was prepared for execution. 2026-04-09 01:19:43.727353 | orchestrator | 2026-04-09 01:19:43 | INFO  | It takes a moment until task 00b8e843-2db9-4ff3-86d5-8791cd5feab5 (test) has been started and output is visible here. 
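The `volume service list` and `compute service list` tables above are read for `Status=enabled` / `State=up` on every row. A sketch of the same check over rows given as dicts (the shape the CLI emits with `-f json` is assumed; running the CLI itself is left out):

```python
# Flag any Cinder/Nova service row that is not enabled and up, using the
# "Binary", "Host", "Status" and "State" columns shown in the tables above.

def services_down(rows):
    """Return (Binary, Host) pairs that are not both enabled and up."""
    return [
        (row["Binary"], row["Host"])
        for row in rows
        if row.get("Status") != "enabled" or row.get("State") != "up"
    ]
```

In this run all nine Cinder rows and all nine Nova rows report `enabled`/`up`, so the list would be empty and the script moves on to the OpenStack test play.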
2026-04-09 01:22:57.497143 | orchestrator | 2026-04-09 01:22:57.497240 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-09 01:22:57.497249 | orchestrator | 2026-04-09 01:22:57.497269 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-09 01:22:57.497280 | orchestrator | Thursday 09 April 2026 01:19:46 +0000 (0:00:00.105) 0:00:00.105 ******** 2026-04-09 01:22:57.497285 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497290 | orchestrator | 2026-04-09 01:22:57.497294 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-09 01:22:57.497299 | orchestrator | Thursday 09 April 2026 01:19:50 +0000 (0:00:03.769) 0:00:03.875 ******** 2026-04-09 01:22:57.497303 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497307 | orchestrator | 2026-04-09 01:22:57.497311 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-09 01:22:57.497315 | orchestrator | Thursday 09 April 2026 01:19:54 +0000 (0:00:04.292) 0:00:08.168 ******** 2026-04-09 01:22:57.497319 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497323 | orchestrator | 2026-04-09 01:22:57.497326 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-09 01:22:57.497330 | orchestrator | Thursday 09 April 2026 01:20:01 +0000 (0:00:06.405) 0:00:14.573 ******** 2026-04-09 01:22:57.497334 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497338 | orchestrator | 2026-04-09 01:22:57.497342 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-09 01:22:57.497346 | orchestrator | Thursday 09 April 2026 01:20:05 +0000 (0:00:04.186) 0:00:18.760 ******** 2026-04-09 01:22:57.497349 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497359 | orchestrator | 2026-04-09 01:22:57.497363 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-09 01:22:57.497367 | orchestrator | Thursday 09 April 2026 01:20:09 +0000 (0:00:04.327) 0:00:23.087 ******** 2026-04-09 01:22:57.497371 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-09 01:22:57.497375 | orchestrator | changed: [localhost] => (item=member) 2026-04-09 01:22:57.497380 | orchestrator | changed: [localhost] => (item=creator) 2026-04-09 01:22:57.497384 | orchestrator | 2026-04-09 01:22:57.497388 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-09 01:22:57.497392 | orchestrator | Thursday 09 April 2026 01:20:21 +0000 (0:00:11.893) 0:00:34.981 ******** 2026-04-09 01:22:57.497396 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497399 | orchestrator | 2026-04-09 01:22:57.497403 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-09 01:22:57.497407 | orchestrator | Thursday 09 April 2026 01:20:26 +0000 (0:00:04.820) 0:00:39.801 ******** 2026-04-09 01:22:57.497411 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497415 | orchestrator | 2026-04-09 01:22:57.497419 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-09 01:22:57.497439 | orchestrator | Thursday 09 April 2026 01:20:31 +0000 (0:00:04.944) 0:00:44.745 ******** 2026-04-09 01:22:57.497443 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497447 | orchestrator | 2026-04-09 01:22:57.497451 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-09 01:22:57.497454 | orchestrator | Thursday 09 April 2026 01:20:36 +0000 (0:00:04.490) 0:00:49.236 ******** 2026-04-09 01:22:57.497458 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497462 | orchestrator | 2026-04-09 01:22:57.497466 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-09 01:22:57.497469 | orchestrator | Thursday 09 April 2026 01:20:40 +0000 (0:00:04.237) 0:00:53.474 ******** 2026-04-09 01:22:57.497473 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497480 | orchestrator | 2026-04-09 01:22:57.497486 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-09 01:22:57.497492 | orchestrator | Thursday 09 April 2026 01:20:44 +0000 (0:00:04.170) 0:00:57.645 ******** 2026-04-09 01:22:57.497498 | orchestrator | changed: [localhost] 2026-04-09 01:22:57.497504 | orchestrator | 2026-04-09 01:22:57.497511 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-09 01:22:57.497517 | orchestrator | Thursday 09 April 2026 01:20:48 +0000 (0:00:03.919) 0:01:01.564 ******** 2026-04-09 01:22:57.497523 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-09 01:22:57.497528 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-09 01:22:57.497535 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-09 01:22:57.497540 | orchestrator | 2026-04-09 01:22:57.497547 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-09 01:22:57.497553 | orchestrator | Thursday 09 April 2026 01:21:01 +0000 (0:00:13.339) 0:01:14.903 ******** 2026-04-09 01:22:57.497561 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-09 01:22:57.497568 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-09 01:22:57.497575 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-09 01:22:57.497586 | orchestrator | 2026-04-09 01:22:57.497593 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-09 01:22:57.497599 | orchestrator | Thursday 09 April 2026 01:21:17 +0000 (0:00:16.201) 0:01:31.104 ******** 2026-04-09 01:22:57.497605 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-09 01:22:57.497612 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-09 01:22:57.497619 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-09 01:22:57.497623 | orchestrator | 2026-04-09 01:22:57.497627 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-09 01:22:57.497631 | orchestrator | 2026-04-09 01:22:57.497635 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-09 01:22:57.497652 | orchestrator | Thursday 09 April 2026 01:21:50 +0000 (0:00:32.706) 0:02:03.811 ******** 2026-04-09 01:22:57.497656 | orchestrator | ok: [localhost] 2026-04-09 01:22:57.497660 | orchestrator | 2026-04-09 01:22:57.497665 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-09 01:22:57.497670 | orchestrator | Thursday 09 April 2026 01:21:54 +0000 (0:00:03.422) 0:02:07.233 ******** 2026-04-09 01:22:57.497687 | orchestrator | skipping: [localhost] 2026-04-09 01:22:57.497691 | orchestrator | 2026-04-09 01:22:57.497696 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-09 01:22:57.497700 | orchestrator | Thursday 09 April 2026 01:21:54 +0000 (0:00:00.046) 0:02:07.279 ******** 2026-04-09 01:22:57.497705 | orchestrator | skipping: [localhost] 2026-04-09 01:22:57.497709 | orchestrator | 2026-04-09 01:22:57.497713 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-09 01:22:57.497724 | orchestrator | 
Thursday 09 April 2026 01:21:54 +0000 (0:00:00.040) 0:02:07.320 ******** 2026-04-09 01:22:57.497729 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-09 01:22:57.497744 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-09 01:22:57.497751 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-09 01:22:57.497764 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-09 01:22:57.497771 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-09 01:22:57.497778 | orchestrator | skipping: [localhost] 2026-04-09 01:22:57.497784 | orchestrator | 2026-04-09 01:22:57.497791 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-09 01:22:57.497798 | orchestrator | Thursday 09 April 2026 01:21:54 +0000 (0:00:00.162) 0:02:07.483 ******** 2026-04-09 01:22:57.497804 | orchestrator | skipping: [localhost] 2026-04-09 01:22:57.497810 | orchestrator | 2026-04-09 01:22:57.497818 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-09 01:22:57.497824 | orchestrator | Thursday 09 April 2026 01:21:54 +0000 (0:00:00.152) 0:02:07.635 ******** 2026-04-09 01:22:57.497831 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 01:22:57.497838 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 01:22:57.497845 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 01:22:57.497851 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 01:22:57.497862 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 01:22:57.497869 | orchestrator | 2026-04-09 
01:22:57.497875 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-09 01:22:57.497883 | orchestrator | Thursday 09 April 2026 01:21:59 +0000 (0:00:04.628) 0:02:12.263 ******** 2026-04-09 01:22:57.497888 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-09 01:22:57.497892 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-09 01:22:57.497896 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-09 01:22:57.497900 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-04-09 01:22:57.497904 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-04-09 01:22:57.497909 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j755945715363.2777', 'results_file': '/ansible/.ansible_async/j755945715363.2777', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:22:57.497916 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j365500025166.2802', 'results_file': '/ansible/.ansible_async/j365500025166.2802', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:22:57.497920 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j992132274175.2827', 'results_file': '/ansible/.ansible_async/j992132274175.2827', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:22:57.497924 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j54138671513.2852', 
'results_file': '/ansible/.ansible_async/j54138671513.2852', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:22:57.497928 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j399497797702.2877', 'results_file': '/ansible/.ansible_async/j399497797702.2877', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:22:57.497937 | orchestrator | 2026-04-09 01:22:57.497942 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-09 01:22:57.497947 | orchestrator | Thursday 09 April 2026 01:22:56 +0000 (0:00:57.486) 0:03:09.749 ******** 2026-04-09 01:22:57.497957 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 01:24:08.912664 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 01:24:08.912763 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 01:24:08.912776 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 01:24:08.912785 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 01:24:08.912793 | orchestrator | 2026-04-09 01:24:08.912801 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-09 01:24:08.912809 | orchestrator | Thursday 09 April 2026 01:23:01 +0000 (0:00:04.639) 0:03:14.394 ******** 2026-04-09 01:24:08.912816 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
2026-04-09 01:24:08.912827 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j285216089823.2988', 'results_file': '/ansible/.ansible_async/j285216089823.2988', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.912838 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j752003782020.3013', 'results_file': '/ansible/.ansible_async/j752003782020.3013', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.912845 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j477371412401.3038', 'results_file': '/ansible/.ansible_async/j477371412401.3038', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.912853 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j942750903252.3063', 'results_file': '/ansible/.ansible_async/j942750903252.3063', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.912874 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j789093234712.3088', 'results_file': '/ansible/.ansible_async/j789093234712.3088', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.912882 | orchestrator | 2026-04-09 01:24:08.912890 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-09 01:24:08.912897 | orchestrator | Thursday 09 April 2026 01:23:10 +0000 (0:00:09.461) 0:03:23.855 ******** 2026-04-09 01:24:08.912904 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 01:24:08.912912 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 01:24:08.912918 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 01:24:08.912925 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 01:24:08.912932 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 01:24:08.912939 | orchestrator | 2026-04-09 01:24:08.912947 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-09 01:24:08.912954 | orchestrator | Thursday 09 April 2026 01:23:15 +0000 (0:00:04.552) 0:03:28.408 ******** 2026-04-09 01:24:08.912961 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-04-09 01:24:08.912990 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j584893506807.3157', 'results_file': '/ansible/.ansible_async/j584893506807.3157', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.912998 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j130471864292.3182', 'results_file': '/ansible/.ansible_async/j130471864292.3182', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.913007 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j799669786808.3215', 'results_file': '/ansible/.ansible_async/j799669786808.3215', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.913014 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j134167021117.3241', 'results_file': '/ansible/.ansible_async/j134167021117.3241', 
'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.913036 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j689715900379.3267', 'results_file': '/ansible/.ansible_async/j689715900379.3267', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-09 01:24:08.913045 | orchestrator | 2026-04-09 01:24:08.913052 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-09 01:24:08.913059 | orchestrator | Thursday 09 April 2026 01:23:25 +0000 (0:00:10.036) 0:03:38.445 ******** 2026-04-09 01:24:08.913066 | orchestrator | changed: [localhost] 2026-04-09 01:24:08.913075 | orchestrator | 2026-04-09 01:24:08.913082 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-09 01:24:08.913089 | orchestrator | Thursday 09 April 2026 01:23:31 +0000 (0:00:06.725) 0:03:45.170 ******** 2026-04-09 01:24:08.913096 | orchestrator | changed: [localhost] 2026-04-09 01:24:08.913104 | orchestrator | 2026-04-09 01:24:08.913111 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-09 01:24:08.913118 | orchestrator | Thursday 09 April 2026 01:23:45 +0000 (0:00:13.539) 0:03:58.710 ******** 2026-04-09 01:24:08.913126 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-09 01:24:08.913133 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-09 01:24:08.913140 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-09 01:24:08.913147 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-09 01:24:08.913154 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-09 01:24:08.913184 | orchestrator | 2026-04-09 
01:24:08.913191 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-09 01:24:08.913196 | orchestrator | Thursday 09 April 2026 01:24:08 +0000 (0:00:23.116) 0:04:21.826 ******** 2026-04-09 01:24:08.913202 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-09 01:24:08.913208 | orchestrator |  "msg": "test: 192.168.112.193" 2026-04-09 01:24:08.913214 | orchestrator | } 2026-04-09 01:24:08.913221 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-09 01:24:08.913230 | orchestrator |  "msg": "test-1: 192.168.112.162" 2026-04-09 01:24:08.913238 | orchestrator | } 2026-04-09 01:24:08.913245 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-09 01:24:08.913252 | orchestrator |  "msg": "test-2: 192.168.112.197" 2026-04-09 01:24:08.913260 | orchestrator | } 2026-04-09 01:24:08.913269 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-09 01:24:08.913278 | orchestrator |  "msg": "test-3: 192.168.112.177" 2026-04-09 01:24:08.913286 | orchestrator | } 2026-04-09 01:24:08.913294 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-09 01:24:08.913309 | orchestrator |  "msg": "test-4: 192.168.112.146" 2026-04-09 01:24:08.913322 | orchestrator | } 2026-04-09 01:24:08.913331 | orchestrator | 2026-04-09 01:24:08.913336 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:24:08.913342 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-09 01:24:08.913350 | orchestrator | 2026-04-09 01:24:08.913356 | orchestrator | 2026-04-09 01:24:08.913363 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:24:08.913370 | orchestrator | Thursday 09 April 2026 01:24:08 +0000 (0:00:00.113) 0:04:21.940 ******** 2026-04-09 01:24:08.913377 | orchestrator | 
=============================================================================== 2026-04-09 01:24:08.913383 | orchestrator | Wait for instance creation to complete --------------------------------- 57.49s 2026-04-09 01:24:08.913390 | orchestrator | Create test routers ---------------------------------------------------- 32.71s 2026-04-09 01:24:08.913397 | orchestrator | Create floating ip addresses ------------------------------------------- 23.12s 2026-04-09 01:24:08.913404 | orchestrator | Create test subnets ---------------------------------------------------- 16.20s 2026-04-09 01:24:08.913411 | orchestrator | Attach test volume ----------------------------------------------------- 13.54s 2026-04-09 01:24:08.913418 | orchestrator | Create test networks --------------------------------------------------- 13.34s 2026-04-09 01:24:08.913425 | orchestrator | Add member roles to user test ------------------------------------------ 11.89s 2026-04-09 01:24:08.913433 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.04s 2026-04-09 01:24:08.913440 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.46s 2026-04-09 01:24:08.913447 | orchestrator | Create test volume ------------------------------------------------------ 6.73s 2026-04-09 01:24:08.913454 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.41s 2026-04-09 01:24:08.913461 | orchestrator | Create ssh security group ----------------------------------------------- 4.94s 2026-04-09 01:24:08.913467 | orchestrator | Create test server group ------------------------------------------------ 4.82s 2026-04-09 01:24:08.913475 | orchestrator | Add metadata to instances ----------------------------------------------- 4.64s 2026-04-09 01:24:08.913482 | orchestrator | Create test instances --------------------------------------------------- 4.63s 2026-04-09 01:24:08.913489 | orchestrator | Add tag to 
instances ---------------------------------------------------- 4.55s 2026-04-09 01:24:08.913496 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.49s 2026-04-09 01:24:08.913503 | orchestrator | Create test user -------------------------------------------------------- 4.33s 2026-04-09 01:24:08.913509 | orchestrator | Create test-admin user -------------------------------------------------- 4.29s 2026-04-09 01:24:08.913516 | orchestrator | Create icmp security group ---------------------------------------------- 4.24s 2026-04-09 01:24:09.094004 | orchestrator | + server_list 2026-04-09 01:24:09.094123 | orchestrator | + openstack --os-cloud test server list 2026-04-09 01:24:12.413458 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 01:24:12.413514 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-09 01:24:12.413522 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 01:24:12.413529 | orchestrator | | 431475d2-712a-4ad4-8d42-2e17c68412a9 | test-3 | ACTIVE | test-2=192.168.112.177, 192.168.201.253 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 01:24:12.413535 | orchestrator | | ed03a7ec-6453-4595-813e-138b4e99232f | test-4 | ACTIVE | test-3=192.168.112.146, 192.168.202.56 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 01:24:12.413549 | orchestrator | | 301a9bf8-feb2-491f-998f-2a78fd251591 | test-2 | ACTIVE | test-2=192.168.112.197, 192.168.201.133 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 01:24:12.413575 | orchestrator | | 58e660c9-45ac-4eae-bbce-5a216788406f | test-1 | ACTIVE | test-1=192.168.112.162, 192.168.200.244 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 01:24:12.413582 | orchestrator | | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | test | ACTIVE | 
test-1=192.168.112.193, 192.168.200.143 | N/A (booted from volume) | SCS-1L-1 | 2026-04-09 01:24:12.413588 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-09 01:24:12.644650 | orchestrator | + openstack --os-cloud test server show test 2026-04-09 01:24:15.697865 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:15.697941 | orchestrator | | Field | Value | 2026-04-09 01:24:15.697950 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:15.697957 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 01:24:15.697963 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 01:24:15.697969 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 01:24:15.697972 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-09 01:24:15.697976 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 01:24:15.697987 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 01:24:15.697998 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 01:24:15.698001 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 
01:24:15.698004 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 01:24:15.698008 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 01:24:15.698036 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 01:24:15.698040 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 01:24:15.698043 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 01:24:15.698047 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 01:24:15.698055 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 01:24:15.698058 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:22:32.000000 | 2026-04-09 01:24:15.698064 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 01:24:15.698067 | orchestrator | | accessIPv4 | | 2026-04-09 01:24:15.698072 | orchestrator | | accessIPv6 | | 2026-04-09 01:24:15.698075 | orchestrator | | addresses | test-1=192.168.112.193, 192.168.200.143 | 2026-04-09 01:24:15.698078 | orchestrator | | config_drive | | 2026-04-09 01:24:15.698082 | orchestrator | | created | 2026-04-09T01:22:04Z | 2026-04-09 01:24:15.698085 | orchestrator | | description | None | 2026-04-09 01:24:15.698088 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 01:24:15.698093 | orchestrator | | hostId | 928602be3900415ea16eb4d91ed5b766ffc301af459d4d7382a85eb9 | 2026-04-09 01:24:15.698096 | orchestrator | | host_status | None | 2026-04-09 01:24:15.698102 | orchestrator | | id | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | 2026-04-09 01:24:15.698106 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 01:24:15.698110 | orchestrator | | 
key_name | test | 2026-04-09 01:24:15.698114 | orchestrator | | locked | False | 2026-04-09 01:24:15.698117 | orchestrator | | locked_reason | None | 2026-04-09 01:24:15.698120 | orchestrator | | name | test | 2026-04-09 01:24:15.698123 | orchestrator | | pinned_availability_zone | None | 2026-04-09 01:24:15.698128 | orchestrator | | progress | 0 | 2026-04-09 01:24:15.698131 | orchestrator | | project_id | e3aa4afcdfbe40eebab4a27012713edd | 2026-04-09 01:24:15.698135 | orchestrator | | properties | hostname='test' | 2026-04-09 01:24:15.698140 | orchestrator | | security_groups | name='ssh' | 2026-04-09 01:24:15.698143 | orchestrator | | | name='icmp' | 2026-04-09 01:24:15.698148 | orchestrator | | server_groups | None | 2026-04-09 01:24:15.698152 | orchestrator | | status | ACTIVE | 2026-04-09 01:24:15.698155 | orchestrator | | tags | test | 2026-04-09 01:24:15.698158 | orchestrator | | trusted_image_certificates | None | 2026-04-09 01:24:15.698167 | orchestrator | | updated | 2026-04-09T01:23:02Z | 2026-04-09 01:24:15.698211 | orchestrator | | user_id | b412d983bdd24e2da316ad547311cbe5 | 2026-04-09 01:24:15.698226 | orchestrator | | volumes_attached | delete_on_termination='True', id='ccd7fa22-6c43-471f-9732-5d875abea5ae' | 2026-04-09 01:24:15.698230 | orchestrator | | | delete_on_termination='False', id='45c16d32-b3f3-4e1a-9e43-4882f5acb12e' | 2026-04-09 01:24:15.702560 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:15.951903 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-09 01:24:18.700965 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:18.701019 | orchestrator | | Field | Value | 2026-04-09 01:24:18.701027 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:18.701034 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 01:24:18.701050 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 01:24:18.701056 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 01:24:18.701063 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-09 01:24:18.701069 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 01:24:18.701075 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 01:24:18.701090 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 01:24:18.701099 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 01:24:18.701106 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 01:24:18.701113 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 01:24:18.701123 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 01:24:18.701143 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 01:24:18.701156 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-09 01:24:18.701166 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 01:24:18.701219 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 01:24:18.701229 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:22:31.000000 | 2026-04-09 01:24:18.701243 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 01:24:18.701252 | orchestrator | | accessIPv4 | | 2026-04-09 01:24:18.701262 | orchestrator | | accessIPv6 | | 2026-04-09 01:24:18.701289 | orchestrator | | addresses | test-1=192.168.112.162, 192.168.200.244 | 2026-04-09 01:24:18.701314 | orchestrator | | config_drive | | 2026-04-09 01:24:18.701324 | orchestrator | | created | 2026-04-09T01:22:04Z | 2026-04-09 01:24:18.701331 | orchestrator | | description | None | 2026-04-09 01:24:18.701337 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 01:24:18.701343 | orchestrator | | hostId | 928602be3900415ea16eb4d91ed5b766ffc301af459d4d7382a85eb9 | 2026-04-09 01:24:18.701349 | orchestrator | | host_status | None | 2026-04-09 01:24:18.701360 | orchestrator | | id | 58e660c9-45ac-4eae-bbce-5a216788406f | 2026-04-09 01:24:18.701369 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 01:24:18.701376 | orchestrator | | key_name | test | 2026-04-09 01:24:18.701385 | orchestrator | | locked | False | 2026-04-09 01:24:18.701391 | orchestrator | | locked_reason | None | 2026-04-09 01:24:18.701397 | orchestrator | | name | test-1 | 2026-04-09 01:24:18.701403 | orchestrator | | pinned_availability_zone | None | 2026-04-09 01:24:18.701409 | orchestrator | | progress | 0 | 2026-04-09 01:24:18.701415 | orchestrator | 
| project_id | e3aa4afcdfbe40eebab4a27012713edd | 2026-04-09 01:24:18.701421 | orchestrator | | properties | hostname='test-1' | 2026-04-09 01:24:18.701431 | orchestrator | | security_groups | name='ssh' | 2026-04-09 01:24:18.701440 | orchestrator | | | name='icmp' | 2026-04-09 01:24:18.701450 | orchestrator | | server_groups | None | 2026-04-09 01:24:18.701456 | orchestrator | | status | ACTIVE | 2026-04-09 01:24:18.701462 | orchestrator | | tags | test | 2026-04-09 01:24:18.701468 | orchestrator | | trusted_image_certificates | None | 2026-04-09 01:24:18.701474 | orchestrator | | updated | 2026-04-09T01:23:03Z | 2026-04-09 01:24:18.701480 | orchestrator | | user_id | b412d983bdd24e2da316ad547311cbe5 | 2026-04-09 01:24:18.701486 | orchestrator | | volumes_attached | delete_on_termination='True', id='d2e63f24-7e28-44c7-abd8-5f34b92de6d6' | 2026-04-09 01:24:18.706668 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:18.952902 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-09 01:24:21.684810 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:21.684923 | orchestrator | | Field | Value | 2026-04-09 01:24:21.684932 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:21.684937 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 01:24:21.684943 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 01:24:21.684950 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 01:24:21.684955 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-09 01:24:21.684961 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 01:24:21.684967 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 01:24:21.684987 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 01:24:21.684993 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 01:24:21.685007 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 01:24:21.685012 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 01:24:21.685018 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 01:24:21.685024 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 01:24:21.685030 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 01:24:21.685035 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 01:24:21.685041 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 01:24:21.685047 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:22:32.000000 | 2026-04-09 01:24:21.685057 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 01:24:21.685074 | orchestrator | | accessIPv4 | | 2026-04-09 01:24:21.685084 | orchestrator | | accessIPv6 | | 2026-04-09 01:24:21.685090 | orchestrator | | 
addresses | test-2=192.168.112.197, 192.168.201.133 | 2026-04-09 01:24:21.685096 | orchestrator | | config_drive | | 2026-04-09 01:24:21.685103 | orchestrator | | created | 2026-04-09T01:22:04Z | 2026-04-09 01:24:21.685107 | orchestrator | | description | None | 2026-04-09 01:24:21.685112 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 01:24:21.685119 | orchestrator | | hostId | eb424ca9cd5099484f3dbc2c01cc762cd4154f12b31acb0eb8663a78 | 2026-04-09 01:24:21.685124 | orchestrator | | host_status | None | 2026-04-09 01:24:21.685139 | orchestrator | | id | 301a9bf8-feb2-491f-998f-2a78fd251591 | 2026-04-09 01:24:21.685146 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 01:24:21.685153 | orchestrator | | key_name | test | 2026-04-09 01:24:21.685159 | orchestrator | | locked | False | 2026-04-09 01:24:21.685166 | orchestrator | | locked_reason | None | 2026-04-09 01:24:21.685268 | orchestrator | | name | test-2 | 2026-04-09 01:24:21.685275 | orchestrator | | pinned_availability_zone | None | 2026-04-09 01:24:21.685279 | orchestrator | | progress | 0 | 2026-04-09 01:24:21.685289 | orchestrator | | project_id | e3aa4afcdfbe40eebab4a27012713edd | 2026-04-09 01:24:21.685298 | orchestrator | | properties | hostname='test-2' | 2026-04-09 01:24:21.685307 | orchestrator | | security_groups | name='ssh' | 2026-04-09 01:24:21.685313 | orchestrator | | | name='icmp' | 2026-04-09 01:24:21.685317 | orchestrator | | server_groups | None | 2026-04-09 01:24:21.685322 | orchestrator | | status | ACTIVE | 2026-04-09 01:24:21.685326 | orchestrator | | tags | test | 2026-04-09 01:24:21.685330 | orchestrator | | 
trusted_image_certificates | None | 2026-04-09 01:24:21.685333 | orchestrator | | updated | 2026-04-09T01:23:03Z | 2026-04-09 01:24:21.685337 | orchestrator | | user_id | b412d983bdd24e2da316ad547311cbe5 | 2026-04-09 01:24:21.685341 | orchestrator | | volumes_attached | delete_on_termination='True', id='02fa696e-239b-461c-8bb5-f0cca6eef6f6' | 2026-04-09 01:24:21.689330 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:21.914679 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-09 01:24:24.660577 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:24.660653 | orchestrator | | Field | Value | 2026-04-09 01:24:24.660665 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:24.660674 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 01:24:24.660681 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-09 01:24:24.660689 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 01:24:24.660695 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-09 01:24:24.660702 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 01:24:24.660722 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 01:24:24.660742 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 01:24:24.660751 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 01:24:24.660761 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 01:24:24.660768 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 01:24:24.660775 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 01:24:24.660782 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 01:24:24.660790 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 01:24:24.660797 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 01:24:24.660811 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 01:24:24.660817 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:22:33.000000 | 2026-04-09 01:24:24.660829 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 01:24:24.660836 | orchestrator | | accessIPv4 | | 2026-04-09 01:24:24.660845 | orchestrator | | accessIPv6 | | 2026-04-09 01:24:24.660852 | orchestrator | | addresses | test-2=192.168.112.177, 192.168.201.253 | 2026-04-09 01:24:24.660860 | orchestrator | | config_drive | | 2026-04-09 01:24:24.660867 | orchestrator | | created | 2026-04-09T01:22:07Z | 2026-04-09 01:24:24.660874 | orchestrator | | description | None | 2026-04-09 01:24:24.660881 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 01:24:24.660886 | orchestrator | | hostId | eb424ca9cd5099484f3dbc2c01cc762cd4154f12b31acb0eb8663a78 | 2026-04-09 01:24:24.660891 | orchestrator | | host_status | None | 2026-04-09 01:24:24.660898 | orchestrator | | id | 431475d2-712a-4ad4-8d42-2e17c68412a9 | 2026-04-09 01:24:24.660903 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 01:24:24.660910 | orchestrator | | key_name | test | 2026-04-09 01:24:24.660915 | orchestrator | | locked | False | 2026-04-09 01:24:24.660919 | orchestrator | | locked_reason | None | 2026-04-09 01:24:24.660924 | orchestrator | | name | test-3 | 2026-04-09 01:24:24.660931 | orchestrator | | pinned_availability_zone | None | 2026-04-09 01:24:24.660935 | orchestrator | | progress | 0 | 2026-04-09 01:24:24.660940 | orchestrator | | project_id | e3aa4afcdfbe40eebab4a27012713edd | 2026-04-09 01:24:24.660944 | orchestrator | | properties | hostname='test-3' | 2026-04-09 01:24:24.660952 | orchestrator | | security_groups | name='ssh' | 2026-04-09 01:24:24.660956 | orchestrator | | | name='icmp' | 2026-04-09 01:24:24.660966 | orchestrator | | server_groups | None | 2026-04-09 01:24:24.660971 | orchestrator | | status | ACTIVE | 2026-04-09 01:24:24.660975 | orchestrator | | tags | test | 2026-04-09 01:24:24.660983 | orchestrator | | trusted_image_certificates | None | 2026-04-09 01:24:24.660994 | orchestrator | | updated | 2026-04-09T01:23:04Z | 2026-04-09 01:24:24.661002 | orchestrator | | user_id | b412d983bdd24e2da316ad547311cbe5 | 2026-04-09 01:24:24.661009 | orchestrator | | volumes_attached | delete_on_termination='True', id='2ba97f24-2c86-438a-9940-b9e4bad32001' | 2026-04-09 01:24:24.665463 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:24.901411 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-09 01:24:27.815129 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:27.815245 | orchestrator | | Field | Value | 2026-04-09 01:24:27.815260 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:27.815268 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-09 01:24:27.815275 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-09 01:24:27.815303 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-09 01:24:27.815310 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-09 01:24:27.815316 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-09 01:24:27.815322 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-09 
01:24:27.815346 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-09 01:24:27.815666 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-09 01:24:27.815685 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-09 01:24:27.815694 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-09 01:24:27.815701 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-09 01:24:27.815718 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-09 01:24:27.815726 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-09 01:24:27.815733 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-09 01:24:27.815740 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-09 01:24:27.815747 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-09T01:22:31.000000 | 2026-04-09 01:24:27.815768 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-09 01:24:27.815775 | orchestrator | | accessIPv4 | | 2026-04-09 01:24:27.815783 | orchestrator | | accessIPv6 | | 2026-04-09 01:24:27.815789 | orchestrator | | addresses | test-3=192.168.112.146, 192.168.202.56 | 2026-04-09 01:24:27.815802 | orchestrator | | config_drive | | 2026-04-09 01:24:27.815808 | orchestrator | | created | 2026-04-09T01:22:07Z | 2026-04-09 01:24:27.815814 | orchestrator | | description | None | 2026-04-09 01:24:27.815821 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-09 01:24:27.815828 | orchestrator | | hostId | 928602be3900415ea16eb4d91ed5b766ffc301af459d4d7382a85eb9 | 2026-04-09 01:24:27.815837 | orchestrator | | host_status | None | 2026-04-09 01:24:27.815850 | orchestrator | | id | 
ed03a7ec-6453-4595-813e-138b4e99232f | 2026-04-09 01:24:27.815857 | orchestrator | | image | N/A (booted from volume) | 2026-04-09 01:24:27.815864 | orchestrator | | key_name | test | 2026-04-09 01:24:27.815888 | orchestrator | | locked | False | 2026-04-09 01:24:27.815894 | orchestrator | | locked_reason | None | 2026-04-09 01:24:27.815898 | orchestrator | | name | test-4 | 2026-04-09 01:24:27.815902 | orchestrator | | pinned_availability_zone | None | 2026-04-09 01:24:27.815906 | orchestrator | | progress | 0 | 2026-04-09 01:24:27.815910 | orchestrator | | project_id | e3aa4afcdfbe40eebab4a27012713edd | 2026-04-09 01:24:27.815917 | orchestrator | | properties | hostname='test-4' | 2026-04-09 01:24:27.815926 | orchestrator | | security_groups | name='ssh' | 2026-04-09 01:24:27.815930 | orchestrator | | | name='icmp' | 2026-04-09 01:24:27.815934 | orchestrator | | server_groups | None | 2026-04-09 01:24:27.815941 | orchestrator | | status | ACTIVE | 2026-04-09 01:24:27.815945 | orchestrator | | tags | test | 2026-04-09 01:24:27.815949 | orchestrator | | trusted_image_certificates | None | 2026-04-09 01:24:27.815953 | orchestrator | | updated | 2026-04-09T01:23:05Z | 2026-04-09 01:24:27.815957 | orchestrator | | user_id | b412d983bdd24e2da316ad547311cbe5 | 2026-04-09 01:24:27.815961 | orchestrator | | volumes_attached | delete_on_termination='True', id='3e49d3ea-2e66-4b43-b4c5-395647a4204b' | 2026-04-09 01:24:27.820056 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-09 01:24:28.070942 | orchestrator | + server_ping 2026-04-09 01:24:28.072196 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:24:28.072252 | orchestrator | ++ tr -d '\r' 2026-04-09 01:24:30.847287 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:24:30.847565 | orchestrator | + ping -c3 192.168.112.177 2026-04-09 01:24:30.860846 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2026-04-09 01:24:30.860909 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=6.45 ms 2026-04-09 01:24:31.857908 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=1.95 ms 2026-04-09 01:24:32.858404 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.20 ms 2026-04-09 01:24:32.858461 | orchestrator | 2026-04-09 01:24:32.858468 | orchestrator | --- 192.168.112.177 ping statistics --- 2026-04-09 01:24:32.858473 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:24:32.858477 | orchestrator | rtt min/avg/max/mdev = 1.197/3.199/6.449/2.318 ms 2026-04-09 01:24:32.858636 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:24:32.858654 | orchestrator | + ping -c3 192.168.112.162 2026-04-09 01:24:32.868211 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-04-09 01:24:32.868269 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=4.44 ms 2026-04-09 01:24:33.866562 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=1.78 ms 2026-04-09 01:24:34.868565 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.29 ms 2026-04-09 01:24:34.868624 | orchestrator | 2026-04-09 01:24:34.868630 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-04-09 01:24:34.868635 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:24:34.868640 | orchestrator | rtt min/avg/max/mdev = 1.287/2.502/4.438/1.383 ms 2026-04-09 01:24:34.869121 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:24:34.869135 | orchestrator | + ping -c3 192.168.112.193 2026-04-09 01:24:34.877195 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2026-04-09 01:24:34.877246 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=3.92 ms 2026-04-09 01:24:35.875819 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=1.60 ms 2026-04-09 01:24:36.878661 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.83 ms 2026-04-09 01:24:36.878731 | orchestrator | 2026-04-09 01:24:36.878740 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-09 01:24:36.878750 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:24:36.878758 | orchestrator | rtt min/avg/max/mdev = 1.599/2.450/3.921/1.044 ms 2026-04-09 01:24:36.878767 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:24:36.878774 | orchestrator | + ping -c3 192.168.112.146 2026-04-09 01:24:36.888309 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 
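The `server_ping` helper traced above lists all ACTIVE floating IPs (`openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'`) and pings each address three times. A minimal Python sketch of the same flow, with the CLI call injectable for testing — the function names `list_active_floating_ips` and `ping_ok` are illustrative, not part of the testbed scripts:

```python
import subprocess

def list_active_floating_ips(run=subprocess.run):
    """Mirror of the shell pipeline: list ACTIVE floating IPs, one per line."""
    result = run(
        ["openstack", "--os-cloud", "test", "floating", "ip", "list",
         "--status", "ACTIVE", "-f", "value", "-c", "Floating IP Address"],
        capture_output=True, text=True, check=True)
    # `tr -d '\r'` in the trace guards against CRLF line endings; do the same.
    return [line.replace("\r", "")
            for line in result.stdout.splitlines() if line.strip()]

def ping_ok(address, count=3):
    """Equivalent of `ping -c3 <address>`; True when ping exits successfully."""
    return subprocess.run(["ping", f"-c{count}", address]).returncode == 0
```

Iterating `ping_ok` over `list_active_floating_ips()` reproduces the loop visible in the trace.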
2026-04-09 01:24:36.888382 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=4.75 ms 2026-04-09 01:24:37.887654 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.05 ms 2026-04-09 01:24:38.888715 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.74 ms 2026-04-09 01:24:38.888802 | orchestrator | 2026-04-09 01:24:38.888812 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-04-09 01:24:38.888820 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:24:38.888827 | orchestrator | rtt min/avg/max/mdev = 1.738/2.843/4.747/1.351 ms 2026-04-09 01:24:38.889798 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:24:38.889841 | orchestrator | + ping -c3 192.168.112.197 2026-04-09 01:24:38.901975 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-04-09 01:24:38.902069 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.98 ms 2026-04-09 01:24:39.897477 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.27 ms 2026-04-09 01:24:40.899269 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.76 ms 2026-04-09 01:24:40.899374 | orchestrator | 2026-04-09 01:24:40.899383 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-04-09 01:24:40.899390 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:24:40.899395 | orchestrator | rtt min/avg/max/mdev = 1.761/4.002/7.980/2.820 ms 2026-04-09 01:24:40.900143 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-09 01:24:40.900263 | orchestrator | + compute_list 2026-04-09 01:24:40.900275 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:24:42.513469 | orchestrator | 2026-04-09 01:24:42 | ERROR  | Unable to get ansible vault password 2026-04-09 01:24:42.513565 
| orchestrator | 2026-04-09 01:24:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:24:42.513572 | orchestrator | 2026-04-09 01:24:42 | ERROR  | Dropping encrypted entries 2026-04-09 01:24:45.707333 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:24:45.707382 | orchestrator | | ID | Name | Status | 2026-04-09 01:24:45.707388 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:24:45.707393 | orchestrator | | 431475d2-712a-4ad4-8d42-2e17c68412a9 | test-3 | ACTIVE | 2026-04-09 01:24:45.707397 | orchestrator | | 301a9bf8-feb2-491f-998f-2a78fd251591 | test-2 | ACTIVE | 2026-04-09 01:24:45.707401 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:24:45.900813 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:24:47.268333 | orchestrator | 2026-04-09 01:24:47 | ERROR  | Unable to get ansible vault password 2026-04-09 01:24:47.268402 | orchestrator | 2026-04-09 01:24:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:24:47.268414 | orchestrator | 2026-04-09 01:24:47 | ERROR  | Dropping encrypted entries 2026-04-09 01:24:49.271175 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:24:49.271319 | orchestrator | | ID | Name | Status | 2026-04-09 01:24:49.271332 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:24:49.271338 | orchestrator | | ed03a7ec-6453-4595-813e-138b4e99232f | test-4 | ACTIVE | 2026-04-09 01:24:49.271344 | orchestrator | | 58e660c9-45ac-4eae-bbce-5a216788406f | test-1 | ACTIVE | 2026-04-09 01:24:49.271350 | orchestrator | | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | test | ACTIVE | 2026-04-09 01:24:49.271357 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-09 01:24:49.539078 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:24:51.062937 | orchestrator | 2026-04-09 01:24:51 | ERROR  | Unable to get ansible vault password 2026-04-09 01:24:51.063051 | orchestrator | 2026-04-09 01:24:51 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:24:51.063064 | orchestrator | 2026-04-09 01:24:51 | ERROR  | Dropping encrypted entries 2026-04-09 01:24:52.294396 | orchestrator | +------+--------+----------+ 2026-04-09 01:24:52.294490 | orchestrator | | ID | Name | Status | 2026-04-09 01:24:52.294501 | orchestrator | |------+--------+----------| 2026-04-09 01:24:52.294506 | orchestrator | +------+--------+----------+ 2026-04-09 01:24:52.568513 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-09 01:24:54.119895 | orchestrator | 2026-04-09 01:24:54 | ERROR  | Unable to get ansible vault password 2026-04-09 01:24:54.120002 | orchestrator | 2026-04-09 01:24:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:24:54.120016 | orchestrator | 2026-04-09 01:24:54 | ERROR  | Dropping encrypted entries 2026-04-09 01:24:55.483795 | orchestrator | 2026-04-09 01:24:55 | INFO  | Live migrating server ed03a7ec-6453-4595-813e-138b4e99232f 2026-04-09 01:25:07.881435 | orchestrator | 2026-04-09 01:25:07 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:10.393127 | orchestrator | 2026-04-09 01:25:10 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:12.709489 | orchestrator | 2026-04-09 01:25:12 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:15.136403 | orchestrator | 2026-04-09 
01:25:15 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:17.545349 | orchestrator | 2026-04-09 01:25:17 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:19.889099 | orchestrator | 2026-04-09 01:25:19 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:22.210287 | orchestrator | 2026-04-09 01:25:22 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:25.083705 | orchestrator | 2026-04-09 01:25:25 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:25:27.553353 | orchestrator | 2026-04-09 01:25:27 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) completed with status ACTIVE 2026-04-09 01:25:27.553424 | orchestrator | 2026-04-09 01:25:27 | INFO  | Live migrating server 58e660c9-45ac-4eae-bbce-5a216788406f 2026-04-09 01:25:38.786889 | orchestrator | 2026-04-09 01:25:38 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:41.181097 | orchestrator | 2026-04-09 01:25:41 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:43.428489 | orchestrator | 2026-04-09 01:25:43 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:45.653022 | orchestrator | 2026-04-09 01:25:45 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:47.943039 | orchestrator | 2026-04-09 01:25:47 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:50.205842 | orchestrator | 2026-04-09 01:25:50 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 
2026-04-09 01:25:52.478928 | orchestrator | 2026-04-09 01:25:52 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:54.807842 | orchestrator | 2026-04-09 01:25:54 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:25:57.131921 | orchestrator | 2026-04-09 01:25:57 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) completed with status ACTIVE 2026-04-09 01:25:57.131991 | orchestrator | 2026-04-09 01:25:57 | INFO  | Live migrating server a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 2026-04-09 01:26:07.719132 | orchestrator | 2026-04-09 01:26:07 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:10.048500 | orchestrator | 2026-04-09 01:26:10 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:12.438397 | orchestrator | 2026-04-09 01:26:12 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:14.692208 | orchestrator | 2026-04-09 01:26:14 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:16.965440 | orchestrator | 2026-04-09 01:26:16 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:19.631681 | orchestrator | 2026-04-09 01:26:19 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:21.981813 | orchestrator | 2026-04-09 01:26:21 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:24.209943 | orchestrator | 2026-04-09 01:26:24 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:26.568414 | orchestrator | 2026-04-09 01:26:26 | INFO  | Live migration of 
a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:28.881094 | orchestrator | 2026-04-09 01:26:28 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:26:31.142321 | orchestrator | 2026-04-09 01:26:31 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) completed with status ACTIVE 2026-04-09 01:26:31.444814 | orchestrator | + compute_list 2026-04-09 01:26:31.444908 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:26:33.048698 | orchestrator | 2026-04-09 01:26:33 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:33.048780 | orchestrator | 2026-04-09 01:26:33 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:33.048790 | orchestrator | 2026-04-09 01:26:33 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:34.723078 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:34.723165 | orchestrator | | ID | Name | Status | 2026-04-09 01:26:34.723174 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:26:34.723182 | orchestrator | | 431475d2-712a-4ad4-8d42-2e17c68412a9 | test-3 | ACTIVE | 2026-04-09 01:26:34.723189 | orchestrator | | ed03a7ec-6453-4595-813e-138b4e99232f | test-4 | ACTIVE | 2026-04-09 01:26:34.723195 | orchestrator | | 301a9bf8-feb2-491f-998f-2a78fd251591 | test-2 | ACTIVE | 2026-04-09 01:26:34.723202 | orchestrator | | 58e660c9-45ac-4eae-bbce-5a216788406f | test-1 | ACTIVE | 2026-04-09 01:26:34.723209 | orchestrator | | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | test | ACTIVE | 2026-04-09 01:26:34.723215 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:35.019644 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:26:36.532620 | orchestrator | 2026-04-09 01:26:36 | ERROR  | Unable to get 
ansible vault password 2026-04-09 01:26:36.532693 | orchestrator | 2026-04-09 01:26:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:36.532701 | orchestrator | 2026-04-09 01:26:36 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:37.666818 | orchestrator | +------+--------+----------+ 2026-04-09 01:26:37.666893 | orchestrator | | ID | Name | Status | 2026-04-09 01:26:37.666899 | orchestrator | |------+--------+----------| 2026-04-09 01:26:37.666903 | orchestrator | +------+--------+----------+ 2026-04-09 01:26:37.956170 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:26:39.512514 | orchestrator | 2026-04-09 01:26:39 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:39.512603 | orchestrator | 2026-04-09 01:26:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:39.512614 | orchestrator | 2026-04-09 01:26:39 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:40.736329 | orchestrator | +------+--------+----------+ 2026-04-09 01:26:40.736407 | orchestrator | | ID | Name | Status | 2026-04-09 01:26:40.736413 | orchestrator | |------+--------+----------| 2026-04-09 01:26:40.736418 | orchestrator | +------+--------+----------+ 2026-04-09 01:26:41.023986 | orchestrator | + server_ping 2026-04-09 01:26:41.025418 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:26:41.025460 | orchestrator | ++ tr -d '\r' 2026-04-09 01:26:43.749248 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:43.749327 | orchestrator | + ping -c3 192.168.112.177 2026-04-09 01:26:43.758145 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 
2026-04-09 01:26:43.758193 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=6.44 ms 2026-04-09 01:26:44.756051 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.63 ms 2026-04-09 01:26:45.757801 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.73 ms 2026-04-09 01:26:45.757877 | orchestrator | 2026-04-09 01:26:45.757885 | orchestrator | --- 192.168.112.177 ping statistics --- 2026-04-09 01:26:45.757891 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:45.757895 | orchestrator | rtt min/avg/max/mdev = 1.731/3.598/6.438/2.040 ms 2026-04-09 01:26:45.757901 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:45.757906 | orchestrator | + ping -c3 192.168.112.162 2026-04-09 01:26:45.770741 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 2026-04-09 01:26:45.770847 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=7.40 ms 2026-04-09 01:26:46.767187 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=3.15 ms 2026-04-09 01:26:47.766388 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.82 ms 2026-04-09 01:26:47.766488 | orchestrator | 2026-04-09 01:26:47.766500 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-04-09 01:26:47.766510 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2026-04-09 01:26:47.766517 | orchestrator | rtt min/avg/max/mdev = 1.821/4.123/7.404/2.381 ms 2026-04-09 01:26:47.766728 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:47.766747 | orchestrator | + ping -c3 192.168.112.193 2026-04-09 01:26:47.778990 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 
2026-04-09 01:26:47.779067 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=7.32 ms 2026-04-09 01:26:48.775341 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.17 ms 2026-04-09 01:26:49.775968 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.21 ms 2026-04-09 01:26:49.776031 | orchestrator | 2026-04-09 01:26:49.776037 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-09 01:26:49.776042 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:49.776047 | orchestrator | rtt min/avg/max/mdev = 1.205/3.565/7.323/2.685 ms 2026-04-09 01:26:49.776640 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:49.776651 | orchestrator | + ping -c3 192.168.112.146 2026-04-09 01:26:49.784932 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 2026-04-09 01:26:49.784981 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=4.32 ms 2026-04-09 01:26:50.783379 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=1.40 ms 2026-04-09 01:26:51.785154 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.14 ms 2026-04-09 01:26:51.785700 | orchestrator | 2026-04-09 01:26:51.785722 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-04-09 01:26:51.785732 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:51.785740 | orchestrator | rtt min/avg/max/mdev = 1.138/2.284/4.316/1.440 ms 2026-04-09 01:26:51.786691 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:26:51.786712 | orchestrator | + ping -c3 192.168.112.197 2026-04-09 01:26:51.795805 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 
2026-04-09 01:26:51.795857 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=5.25 ms 2026-04-09 01:26:52.793955 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.97 ms 2026-04-09 01:26:53.795146 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.56 ms 2026-04-09 01:26:53.795223 | orchestrator | 2026-04-09 01:26:53.795247 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-04-09 01:26:53.795259 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:26:53.795302 | orchestrator | rtt min/avg/max/mdev = 1.563/2.926/5.245/1.647 ms 2026-04-09 01:26:53.795790 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-09 01:26:55.374837 | orchestrator | 2026-04-09 01:26:55 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:55.374932 | orchestrator | 2026-04-09 01:26:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:55.374958 | orchestrator | 2026-04-09 01:26:55 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:56.475107 | orchestrator | 2026-04-09 01:26:56 | INFO  | No migratable instances found on node testbed-node-5 2026-04-09 01:26:56.745740 | orchestrator | + compute_list 2026-04-09 01:26:56.745811 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:26:58.271132 | orchestrator | 2026-04-09 01:26:58 | ERROR  | Unable to get ansible vault password 2026-04-09 01:26:58.271213 | orchestrator | 2026-04-09 01:26:58 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:26:58.271223 | orchestrator | 2026-04-09 01:26:58 | ERROR  | Dropping encrypted entries 2026-04-09 01:26:59.943408 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:26:59.943462 | orchestrator | | ID | Name | Status | 
2026-04-09 01:26:59.943467 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:26:59.943471 | orchestrator | | 431475d2-712a-4ad4-8d42-2e17c68412a9 | test-3 | ACTIVE | 2026-04-09 01:26:59.943475 | orchestrator | | ed03a7ec-6453-4595-813e-138b4e99232f | test-4 | ACTIVE | 2026-04-09 01:26:59.943479 | orchestrator | | 301a9bf8-feb2-491f-998f-2a78fd251591 | test-2 | ACTIVE | 2026-04-09 01:26:59.943483 | orchestrator | | 58e660c9-45ac-4eae-bbce-5a216788406f | test-1 | ACTIVE | 2026-04-09 01:26:59.943488 | orchestrator | | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | test | ACTIVE | 2026-04-09 01:26:59.943492 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:27:00.228843 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:27:01.772433 | orchestrator | 2026-04-09 01:27:01 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:01.772481 | orchestrator | 2026-04-09 01:27:01 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:01.772498 | orchestrator | 2026-04-09 01:27:01 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:02.803206 | orchestrator | +------+--------+----------+ 2026-04-09 01:27:02.803262 | orchestrator | | ID | Name | Status | 2026-04-09 01:27:02.803295 | orchestrator | |------+--------+----------| 2026-04-09 01:27:02.803302 | orchestrator | +------+--------+----------+ 2026-04-09 01:27:03.075328 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:27:04.568204 | orchestrator | 2026-04-09 01:27:04 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:04.568253 | orchestrator | 2026-04-09 01:27:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:04.568261 | orchestrator | 2026-04-09 01:27:04 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:05.574482 | 
orchestrator | +------+--------+----------+ 2026-04-09 01:27:05.574550 | orchestrator | | ID | Name | Status | 2026-04-09 01:27:05.574558 | orchestrator | |------+--------+----------| 2026-04-09 01:27:05.574565 | orchestrator | +------+--------+----------+ 2026-04-09 01:27:05.866835 | orchestrator | + server_ping 2026-04-09 01:27:05.868396 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:27:05.868436 | orchestrator | ++ tr -d '\r' 2026-04-09 01:27:08.563113 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:08.563186 | orchestrator | + ping -c3 192.168.112.177 2026-04-09 01:27:08.572741 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2026-04-09 01:27:08.572822 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=8.17 ms 2026-04-09 01:27:09.569065 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.76 ms 2026-04-09 01:27:10.568805 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.76 ms 2026-04-09 01:27:10.568934 | orchestrator | 2026-04-09 01:27:10.568946 | orchestrator | --- 192.168.112.177 ping statistics --- 2026-04-09 01:27:10.568954 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:27:10.568960 | orchestrator | rtt min/avg/max/mdev = 1.760/4.228/8.169/2.815 ms 2026-04-09 01:27:10.569975 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:10.570061 | orchestrator | + ping -c3 192.168.112.162 2026-04-09 01:27:10.584124 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-04-09 01:27:10.584215 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=9.71 ms 2026-04-09 01:27:11.578796 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.46 ms 2026-04-09 01:27:12.579704 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.78 ms 2026-04-09 01:27:12.579778 | orchestrator | 2026-04-09 01:27:12.579785 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-04-09 01:27:12.579791 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:27:12.579796 | orchestrator | rtt min/avg/max/mdev = 1.784/4.650/9.707/3.586 ms 2026-04-09 01:27:12.579802 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:12.579807 | orchestrator | + ping -c3 192.168.112.193 2026-04-09 01:27:12.591077 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2026-04-09 01:27:12.591162 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=7.51 ms 2026-04-09 01:27:13.587931 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.47 ms 2026-04-09 01:27:14.589146 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.60 ms 2026-04-09 01:27:14.589229 | orchestrator | 2026-04-09 01:27:14.589240 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-09 01:27:14.589248 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:27:14.589255 | orchestrator | rtt min/avg/max/mdev = 1.600/3.858/7.506/2.603 ms 2026-04-09 01:27:14.589292 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:14.589300 | orchestrator | + ping -c3 192.168.112.146 2026-04-09 01:27:14.600877 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 
2026-04-09 01:27:14.600980 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=6.76 ms 2026-04-09 01:27:15.597611 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=1.76 ms 2026-04-09 01:27:16.599362 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.32 ms 2026-04-09 01:27:16.599424 | orchestrator | 2026-04-09 01:27:16.599435 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-04-09 01:27:16.599443 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-09 01:27:16.599451 | orchestrator | rtt min/avg/max/mdev = 1.317/3.280/6.764/2.469 ms 2026-04-09 01:27:16.600391 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:27:16.600423 | orchestrator | + ping -c3 192.168.112.197 2026-04-09 01:27:16.608600 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-04-09 01:27:16.608646 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=4.25 ms 2026-04-09 01:27:17.608821 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.73 ms 2026-04-09 01:27:18.610195 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.97 ms 2026-04-09 01:27:18.610358 | orchestrator | 2026-04-09 01:27:18.610371 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-04-09 01:27:18.610399 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:27:18.610407 | orchestrator | rtt min/avg/max/mdev = 1.967/2.983/4.253/0.950 ms 2026-04-09 01:27:18.610517 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-04-09 01:27:20.172556 | orchestrator | 2026-04-09 01:27:20 | ERROR  | Unable to get ansible vault password 2026-04-09 01:27:20.172635 | orchestrator | 2026-04-09 01:27:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-09 01:27:20.172674 | orchestrator | 2026-04-09 01:27:20 | ERROR  | Dropping encrypted entries 2026-04-09 01:27:21.893391 | orchestrator | 2026-04-09 01:27:21 | INFO  | Live migrating server 431475d2-712a-4ad4-8d42-2e17c68412a9 2026-04-09 01:27:33.846780 | orchestrator | 2026-04-09 01:27:33 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:36.242740 | orchestrator | 2026-04-09 01:27:36 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:38.656214 | orchestrator | 2026-04-09 01:27:38 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:41.087725 | orchestrator | 2026-04-09 01:27:41 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:43.368815 | orchestrator | 2026-04-09 01:27:43 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:45.622867 | orchestrator | 2026-04-09 01:27:45 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:47.979791 | orchestrator | 2026-04-09 01:27:47 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:50.262195 | orchestrator | 2026-04-09 01:27:50 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:27:52.554976 | orchestrator | 2026-04-09 01:27:52 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) completed with status ACTIVE 2026-04-09 01:27:52.555071 | orchestrator | 2026-04-09 01:27:52 | INFO  | Live migrating server ed03a7ec-6453-4595-813e-138b4e99232f 2026-04-09 01:28:04.477625 | orchestrator | 2026-04-09 01:28:04 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is 
still in progress 2026-04-09 01:28:06.798467 | orchestrator | 2026-04-09 01:28:06 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:09.050276 | orchestrator | 2026-04-09 01:28:09 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:11.283583 | orchestrator | 2026-04-09 01:28:11 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:13.609040 | orchestrator | 2026-04-09 01:28:13 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:15.916994 | orchestrator | 2026-04-09 01:28:15 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:18.203702 | orchestrator | 2026-04-09 01:28:18 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:20.748677 | orchestrator | 2026-04-09 01:28:20 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:28:23.159153 | orchestrator | 2026-04-09 01:28:23 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) completed with status ACTIVE 2026-04-09 01:28:23.159255 | orchestrator | 2026-04-09 01:28:23 | INFO  | Live migrating server 301a9bf8-feb2-491f-998f-2a78fd251591 2026-04-09 01:28:35.291164 | orchestrator | 2026-04-09 01:28:35 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:37.530801 | orchestrator | 2026-04-09 01:28:37 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:39.901400 | orchestrator | 2026-04-09 01:28:39 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:42.333874 | orchestrator | 2026-04-09 01:28:42 | INFO  | Live migration of 
301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:44.625971 | orchestrator | 2026-04-09 01:28:44 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:46.882476 | orchestrator | 2026-04-09 01:28:46 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:49.156563 | orchestrator | 2026-04-09 01:28:49 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:51.425582 | orchestrator | 2026-04-09 01:28:51 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:53.702167 | orchestrator | 2026-04-09 01:28:53 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:28:56.073487 | orchestrator | 2026-04-09 01:28:56 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) completed with status ACTIVE 2026-04-09 01:28:56.073579 | orchestrator | 2026-04-09 01:28:56 | INFO  | Live migrating server 58e660c9-45ac-4eae-bbce-5a216788406f 2026-04-09 01:29:06.988926 | orchestrator | 2026-04-09 01:29:06 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:09.313462 | orchestrator | 2026-04-09 01:29:09 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:11.671453 | orchestrator | 2026-04-09 01:29:11 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:14.083990 | orchestrator | 2026-04-09 01:29:14 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:16.290110 | orchestrator | 2026-04-09 01:29:16 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:18.578951 | orchestrator 
| 2026-04-09 01:29:18 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:20.861478 | orchestrator | 2026-04-09 01:29:20 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:23.225690 | orchestrator | 2026-04-09 01:29:23 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:29:25.551545 | orchestrator | 2026-04-09 01:29:25 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) completed with status ACTIVE 2026-04-09 01:29:25.551634 | orchestrator | 2026-04-09 01:29:25 | INFO  | Live migrating server a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 2026-04-09 01:29:35.683581 | orchestrator | 2026-04-09 01:29:35 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:37.990273 | orchestrator | 2026-04-09 01:29:37 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:40.380700 | orchestrator | 2026-04-09 01:29:40 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:42.755741 | orchestrator | 2026-04-09 01:29:42 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:45.029396 | orchestrator | 2026-04-09 01:29:45 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:47.275899 | orchestrator | 2026-04-09 01:29:47 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:49.559981 | orchestrator | 2026-04-09 01:29:49 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:51.848132 | orchestrator | 2026-04-09 01:29:51 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 
2026-04-09 01:29:54.082719 | orchestrator | 2026-04-09 01:29:54 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:56.289693 | orchestrator | 2026-04-09 01:29:56 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:29:58.607573 | orchestrator | 2026-04-09 01:29:58 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) completed with status ACTIVE 2026-04-09 01:29:58.886347 | orchestrator | + compute_list 2026-04-09 01:29:58.886419 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:30:00.486553 | orchestrator | 2026-04-09 01:30:00 | ERROR  | Unable to get ansible vault password 2026-04-09 01:30:00.486676 | orchestrator | 2026-04-09 01:30:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:30:00.486690 | orchestrator | 2026-04-09 01:30:00 | ERROR  | Dropping encrypted entries 2026-04-09 01:30:01.708696 | orchestrator | +------+--------+----------+ 2026-04-09 01:30:01.708772 | orchestrator | | ID | Name | Status | 2026-04-09 01:30:01.708779 | orchestrator | |------+--------+----------| 2026-04-09 01:30:01.708784 | orchestrator | +------+--------+----------+ 2026-04-09 01:30:02.000937 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:30:03.617756 | orchestrator | 2026-04-09 01:30:03 | ERROR  | Unable to get ansible vault password 2026-04-09 01:30:03.617850 | orchestrator | 2026-04-09 01:30:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:30:03.618552 | orchestrator | 2026-04-09 01:30:03 | ERROR  | Dropping encrypted entries 2026-04-09 01:30:04.973802 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:30:04.973851 | orchestrator | | ID | Name | Status | 2026-04-09 01:30:04.973856 | orchestrator | 
|--------------------------------------+--------+----------| 2026-04-09 01:30:04.973861 | orchestrator | | 431475d2-712a-4ad4-8d42-2e17c68412a9 | test-3 | ACTIVE | 2026-04-09 01:30:04.973865 | orchestrator | | ed03a7ec-6453-4595-813e-138b4e99232f | test-4 | ACTIVE | 2026-04-09 01:30:04.973872 | orchestrator | | 301a9bf8-feb2-491f-998f-2a78fd251591 | test-2 | ACTIVE | 2026-04-09 01:30:04.973879 | orchestrator | | 58e660c9-45ac-4eae-bbce-5a216788406f | test-1 | ACTIVE | 2026-04-09 01:30:04.973885 | orchestrator | | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | test | ACTIVE | 2026-04-09 01:30:04.973892 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:30:05.257086 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:30:06.811574 | orchestrator | 2026-04-09 01:30:06 | ERROR  | Unable to get ansible vault password 2026-04-09 01:30:06.811650 | orchestrator | 2026-04-09 01:30:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:30:06.811658 | orchestrator | 2026-04-09 01:30:06 | ERROR  | Dropping encrypted entries 2026-04-09 01:30:07.922965 | orchestrator | +------+--------+----------+ 2026-04-09 01:30:07.923056 | orchestrator | | ID | Name | Status | 2026-04-09 01:30:07.923066 | orchestrator | |------+--------+----------| 2026-04-09 01:30:07.923072 | orchestrator | +------+--------+----------+ 2026-04-09 01:30:08.252674 | orchestrator | + server_ping 2026-04-09 01:30:08.252885 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:30:08.253127 | orchestrator | ++ tr -d '\r' 2026-04-09 01:30:10.974482 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:30:10.974574 | orchestrator | + ping -c3 192.168.112.177 2026-04-09 01:30:10.983277 | orchestrator | PING 
192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2026-04-09 01:30:10.983378 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=6.83 ms 2026-04-09 01:30:11.979836 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=1.69 ms 2026-04-09 01:30:12.980926 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.67 ms 2026-04-09 01:30:12.981007 | orchestrator | 2026-04-09 01:30:12.981017 | orchestrator | --- 192.168.112.177 ping statistics --- 2026-04-09 01:30:12.981025 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-09 01:30:12.981032 | orchestrator | rtt min/avg/max/mdev = 1.665/3.392/6.827/2.428 ms 2026-04-09 01:30:12.981040 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:30:12.981046 | orchestrator | + ping -c3 192.168.112.162 2026-04-09 01:30:12.992763 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-04-09 01:30:12.992857 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=8.20 ms 2026-04-09 01:30:13.988400 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.24 ms 2026-04-09 01:30:14.989089 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.62 ms 2026-04-09 01:30:14.989178 | orchestrator | 2026-04-09 01:30:14.989188 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-04-09 01:30:14.989196 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:30:14.989203 | orchestrator | rtt min/avg/max/mdev = 1.616/4.017/8.201/2.969 ms 2026-04-09 01:30:14.989743 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:30:14.989771 | orchestrator | + ping -c3 192.168.112.193 2026-04-09 01:30:14.999114 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2026-04-09 01:30:14.999188 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=4.71 ms 2026-04-09 01:30:15.998487 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.12 ms 2026-04-09 01:30:16.999580 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.53 ms 2026-04-09 01:30:16.999662 | orchestrator | 2026-04-09 01:30:16.999669 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-09 01:30:16.999675 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:30:16.999680 | orchestrator | rtt min/avg/max/mdev = 1.527/2.785/4.714/1.384 ms 2026-04-09 01:30:16.999685 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:30:16.999690 | orchestrator | + ping -c3 192.168.112.146 2026-04-09 01:30:17.011373 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 
2026-04-09 01:30:17.011456 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=7.01 ms 2026-04-09 01:30:18.008255 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.45 ms 2026-04-09 01:30:19.010522 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.83 ms 2026-04-09 01:30:19.010610 | orchestrator | 2026-04-09 01:30:19.010640 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-04-09 01:30:19.010671 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:30:19.010693 | orchestrator | rtt min/avg/max/mdev = 1.831/3.763/7.012/2.310 ms 2026-04-09 01:30:19.010701 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:30:19.010708 | orchestrator | + ping -c3 192.168.112.197 2026-04-09 01:30:19.021700 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-04-09 01:30:19.021793 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.28 ms 2026-04-09 01:30:20.018511 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.46 ms 2026-04-09 01:30:21.019807 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.36 ms 2026-04-09 01:30:21.019874 | orchestrator | 2026-04-09 01:30:21.019889 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-04-09 01:30:21.019899 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:30:21.019926 | orchestrator | rtt min/avg/max/mdev = 1.362/3.699/7.275/2.568 ms 2026-04-09 01:30:21.019937 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-04-09 01:30:22.572009 | orchestrator | 2026-04-09 01:30:22 | ERROR  | Unable to get ansible vault password 2026-04-09 01:30:22.572078 | orchestrator | 2026-04-09 01:30:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-09 01:30:22.572085 | orchestrator | 2026-04-09 01:30:22 | ERROR  | Dropping encrypted entries 2026-04-09 01:30:24.179663 | orchestrator | 2026-04-09 01:30:24 | INFO  | Live migrating server 431475d2-712a-4ad4-8d42-2e17c68412a9 2026-04-09 01:30:34.422763 | orchestrator | 2026-04-09 01:30:34 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:36.801535 | orchestrator | 2026-04-09 01:30:36 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:39.170498 | orchestrator | 2026-04-09 01:30:39 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:41.508708 | orchestrator | 2026-04-09 01:30:41 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:43.839276 | orchestrator | 2026-04-09 01:30:43 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:46.181273 | orchestrator | 2026-04-09 01:30:46 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:48.384568 | orchestrator | 2026-04-09 01:30:48 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:50.728584 | orchestrator | 2026-04-09 01:30:50 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:53.282719 | orchestrator | 2026-04-09 01:30:53 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:55.639489 | orchestrator | 2026-04-09 01:30:55 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:30:57.933104 | orchestrator | 2026-04-09 01:30:57 | INFO  | Live migration of 
431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) is still in progress 2026-04-09 01:31:00.273502 | orchestrator | 2026-04-09 01:31:00 | INFO  | Live migration of 431475d2-712a-4ad4-8d42-2e17c68412a9 (test-3) completed with status ACTIVE 2026-04-09 01:31:00.273593 | orchestrator | 2026-04-09 01:31:00 | INFO  | Live migrating server ed03a7ec-6453-4595-813e-138b4e99232f 2026-04-09 01:31:10.046421 | orchestrator | 2026-04-09 01:31:10 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:12.361394 | orchestrator | 2026-04-09 01:31:12 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:14.695007 | orchestrator | 2026-04-09 01:31:14 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:16.976780 | orchestrator | 2026-04-09 01:31:16 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:19.293423 | orchestrator | 2026-04-09 01:31:19 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:21.558007 | orchestrator | 2026-04-09 01:31:21 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:23.849489 | orchestrator | 2026-04-09 01:31:23 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:26.107826 | orchestrator | 2026-04-09 01:31:26 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) is still in progress 2026-04-09 01:31:28.374459 | orchestrator | 2026-04-09 01:31:28 | INFO  | Live migration of ed03a7ec-6453-4595-813e-138b4e99232f (test-4) completed with status ACTIVE 2026-04-09 01:31:28.374534 | orchestrator | 2026-04-09 01:31:28 | INFO  | Live migrating server 301a9bf8-feb2-491f-998f-2a78fd251591 2026-04-09 01:31:38.550282 | orchestrator | 2026-04-09 
01:31:38 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:40.934383 | orchestrator | 2026-04-09 01:31:40 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:43.397974 | orchestrator | 2026-04-09 01:31:43 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:45.589936 | orchestrator | 2026-04-09 01:31:45 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:47.797559 | orchestrator | 2026-04-09 01:31:47 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:50.080277 | orchestrator | 2026-04-09 01:31:50 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:52.352484 | orchestrator | 2026-04-09 01:31:52 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:54.623684 | orchestrator | 2026-04-09 01:31:54 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) is still in progress 2026-04-09 01:31:56.977717 | orchestrator | 2026-04-09 01:31:56 | INFO  | Live migration of 301a9bf8-feb2-491f-998f-2a78fd251591 (test-2) completed with status ACTIVE 2026-04-09 01:31:56.977826 | orchestrator | 2026-04-09 01:31:56 | INFO  | Live migrating server 58e660c9-45ac-4eae-bbce-5a216788406f 2026-04-09 01:32:06.201844 | orchestrator | 2026-04-09 01:32:06 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:08.546527 | orchestrator | 2026-04-09 01:32:08 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:10.830283 | orchestrator | 2026-04-09 01:32:10 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 
2026-04-09 01:32:13.074558 | orchestrator | 2026-04-09 01:32:13 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:15.474279 | orchestrator | 2026-04-09 01:32:15 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:17.823368 | orchestrator | 2026-04-09 01:32:17 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:20.036890 | orchestrator | 2026-04-09 01:32:20 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:22.372539 | orchestrator | 2026-04-09 01:32:22 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) is still in progress 2026-04-09 01:32:24.643278 | orchestrator | 2026-04-09 01:32:24 | INFO  | Live migration of 58e660c9-45ac-4eae-bbce-5a216788406f (test-1) completed with status ACTIVE 2026-04-09 01:32:24.643368 | orchestrator | 2026-04-09 01:32:24 | INFO  | Live migrating server a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 2026-04-09 01:32:34.727541 | orchestrator | 2026-04-09 01:32:34 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:37.082828 | orchestrator | 2026-04-09 01:32:37 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:39.454163 | orchestrator | 2026-04-09 01:32:39 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:41.751365 | orchestrator | 2026-04-09 01:32:41 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:44.205263 | orchestrator | 2026-04-09 01:32:44 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:46.577485 | orchestrator | 2026-04-09 01:32:46 | INFO  | Live migration of 
a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:48.985512 | orchestrator | 2026-04-09 01:32:48 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:51.198190 | orchestrator | 2026-04-09 01:32:51 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:53.428837 | orchestrator | 2026-04-09 01:32:53 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) is still in progress 2026-04-09 01:32:55.694004 | orchestrator | 2026-04-09 01:32:55 | INFO  | Live migration of a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 (test) completed with status ACTIVE 2026-04-09 01:32:56.003164 | orchestrator | + compute_list 2026-04-09 01:32:56.003216 | orchestrator | + osism manage compute list testbed-node-3 2026-04-09 01:32:57.516636 | orchestrator | 2026-04-09 01:32:57 | ERROR  | Unable to get ansible vault password 2026-04-09 01:32:57.516731 | orchestrator | 2026-04-09 01:32:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:32:57.517434 | orchestrator | 2026-04-09 01:32:57 | ERROR  | Dropping encrypted entries 2026-04-09 01:32:58.572008 | orchestrator | +------+--------+----------+ 2026-04-09 01:32:58.572072 | orchestrator | | ID | Name | Status | 2026-04-09 01:32:58.572081 | orchestrator | |------+--------+----------| 2026-04-09 01:32:58.572087 | orchestrator | +------+--------+----------+ 2026-04-09 01:32:58.852571 | orchestrator | + osism manage compute list testbed-node-4 2026-04-09 01:33:00.335756 | orchestrator | 2026-04-09 01:33:00 | ERROR  | Unable to get ansible vault password 2026-04-09 01:33:00.335827 | orchestrator | 2026-04-09 01:33:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:33:00.335835 | orchestrator | 2026-04-09 01:33:00 | ERROR  | Dropping encrypted entries 
2026-04-09 01:33:01.432799 | orchestrator | +------+--------+----------+ 2026-04-09 01:33:01.432902 | orchestrator | | ID | Name | Status | 2026-04-09 01:33:01.432911 | orchestrator | |------+--------+----------| 2026-04-09 01:33:01.432916 | orchestrator | +------+--------+----------+ 2026-04-09 01:33:01.641135 | orchestrator | + osism manage compute list testbed-node-5 2026-04-09 01:33:03.167973 | orchestrator | 2026-04-09 01:33:03 | ERROR  | Unable to get ansible vault password 2026-04-09 01:33:03.168143 | orchestrator | 2026-04-09 01:33:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-09 01:33:03.168181 | orchestrator | 2026-04-09 01:33:03 | ERROR  | Dropping encrypted entries 2026-04-09 01:33:04.601802 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:33:04.601855 | orchestrator | | ID | Name | Status | 2026-04-09 01:33:04.601861 | orchestrator | |--------------------------------------+--------+----------| 2026-04-09 01:33:04.601874 | orchestrator | | 431475d2-712a-4ad4-8d42-2e17c68412a9 | test-3 | ACTIVE | 2026-04-09 01:33:04.601883 | orchestrator | | ed03a7ec-6453-4595-813e-138b4e99232f | test-4 | ACTIVE | 2026-04-09 01:33:04.601904 | orchestrator | | 301a9bf8-feb2-491f-998f-2a78fd251591 | test-2 | ACTIVE | 2026-04-09 01:33:04.601914 | orchestrator | | 58e660c9-45ac-4eae-bbce-5a216788406f | test-1 | ACTIVE | 2026-04-09 01:33:04.601920 | orchestrator | | a299465f-f4f0-4c3a-aa6d-b36ae58d4ec8 | test | ACTIVE | 2026-04-09 01:33:04.601926 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-09 01:33:04.915656 | orchestrator | + server_ping 2026-04-09 01:33:04.917556 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-09 01:33:04.917598 | orchestrator | ++ tr -d '\r' 2026-04-09 01:33:07.715141 | orchestrator | + for address in $(openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:33:07.715208 | orchestrator | + ping -c3 192.168.112.177 2026-04-09 01:33:07.723084 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2026-04-09 01:33:07.723147 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=4.05 ms 2026-04-09 01:33:08.721654 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=1.70 ms 2026-04-09 01:33:09.724317 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.08 ms 2026-04-09 01:33:09.724387 | orchestrator | 2026-04-09 01:33:09.724394 | orchestrator | --- 192.168.112.177 ping statistics --- 2026-04-09 01:33:09.724400 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:33:09.724404 | orchestrator | rtt min/avg/max/mdev = 1.698/2.608/4.049/1.030 ms 2026-04-09 01:33:09.725373 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:33:09.725419 | orchestrator | + ping -c3 192.168.112.162 2026-04-09 01:33:09.735720 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 
2026-04-09 01:33:09.735813 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=5.86 ms 2026-04-09 01:33:10.733116 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.11 ms 2026-04-09 01:33:11.734226 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.55 ms 2026-04-09 01:33:11.734396 | orchestrator | 2026-04-09 01:33:11.734409 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-04-09 01:33:11.734415 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:33:11.734420 | orchestrator | rtt min/avg/max/mdev = 1.551/3.174/5.860/1.912 ms 2026-04-09 01:33:11.735646 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:33:11.735684 | orchestrator | + ping -c3 192.168.112.193 2026-04-09 01:33:11.744764 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2026-04-09 01:33:11.744837 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=4.74 ms 2026-04-09 01:33:12.744541 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.21 ms 2026-04-09 01:33:13.745789 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.56 ms 2026-04-09 01:33:13.745889 | orchestrator | 2026-04-09 01:33:13.745899 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-09 01:33:13.745907 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-09 01:33:13.745914 | orchestrator | rtt min/avg/max/mdev = 1.556/2.833/4.739/1.373 ms 2026-04-09 01:33:13.745922 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:33:13.745930 | orchestrator | + ping -c3 192.168.112.146 2026-04-09 01:33:13.754853 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 
2026-04-09 01:33:13.754922 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=4.70 ms 2026-04-09 01:33:14.754596 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.29 ms 2026-04-09 01:33:15.754633 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.06 ms 2026-04-09 01:33:15.754698 | orchestrator | 2026-04-09 01:33:15.754706 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-04-09 01:33:15.754711 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:33:15.754715 | orchestrator | rtt min/avg/max/mdev = 1.057/2.681/4.697/1.511 ms 2026-04-09 01:33:15.754719 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-09 01:33:15.754735 | orchestrator | + ping -c3 192.168.112.197 2026-04-09 01:33:15.764066 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-04-09 01:33:15.764125 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=4.94 ms 2026-04-09 01:33:16.762169 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.74 ms 2026-04-09 01:33:17.764485 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.52 ms 2026-04-09 01:33:17.765366 | orchestrator | 2026-04-09 01:33:17.765412 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-04-09 01:33:17.765432 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-09 01:33:17.765440 | orchestrator | rtt min/avg/max/mdev = 1.518/2.730/4.936/1.561 ms 2026-04-09 01:33:17.891801 | orchestrator | ok: Runtime: 0:17:28.240696 2026-04-09 01:33:17.942707 | 2026-04-09 01:33:17.942865 | TASK [Run tempest] 2026-04-09 01:33:18.706592 | orchestrator | + set -e 2026-04-09 01:33:18.706727 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 01:33:18.706749 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 
01:33:18.706757 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 01:33:18.706765 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-09 01:33:18.706772 | orchestrator | ++ CEPH_VERSION=reef 2026-04-09 01:33:18.706780 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 01:33:18.706807 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 01:33:18.706821 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-09 01:33:18.706834 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-09 01:33:18.706855 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-09 01:33:18.706866 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-09 01:33:18.706872 | orchestrator | ++ export ARA=false 2026-04-09 01:33:18.706878 | orchestrator | ++ ARA=false 2026-04-09 01:33:18.706887 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 01:33:18.706893 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 01:33:18.706899 | orchestrator | ++ export TEMPEST=true 2026-04-09 01:33:18.706909 | orchestrator | ++ TEMPEST=true 2026-04-09 01:33:18.706916 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 01:33:18.706922 | orchestrator | ++ IS_ZUUL=true 2026-04-09 01:33:18.706940 | orchestrator | 2026-04-09 01:33:18.706945 | orchestrator | # Tempest 2026-04-09 01:33:18.706950 | orchestrator | 2026-04-09 01:33:18.706955 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 01:33:18.706960 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-04-09 01:33:18.706965 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 01:33:18.706970 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 01:33:18.706974 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 01:33:18.706979 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 01:33:18.706983 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 01:33:18.706988 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 01:33:18.706992 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 
01:33:18.706997 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 01:33:18.707001 | orchestrator | + echo 2026-04-09 01:33:18.707006 | orchestrator | + echo '# Tempest' 2026-04-09 01:33:18.707010 | orchestrator | + echo 2026-04-09 01:33:18.707015 | orchestrator | + [[ ! -e /opt/tempest ]] 2026-04-09 01:33:18.707019 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-04-09 01:33:30.131183 | orchestrator | 2026-04-09 01:33:30 | INFO  | Prepare task for execution of tempest. 2026-04-09 01:33:30.215419 | orchestrator | 2026-04-09 01:33:30 | INFO  | Task 584ced17-de55-4b4d-aaa8-0428e7ed25c4 (tempest) was prepared for execution. 2026-04-09 01:33:30.215481 | orchestrator | 2026-04-09 01:33:30 | INFO  | It takes a moment until task 584ced17-de55-4b4d-aaa8-0428e7ed25c4 (tempest) has been started and output is visible here. 2026-04-09 01:34:44.050763 | orchestrator | 2026-04-09 01:34:44.050834 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-04-09 01:34:44.050843 | orchestrator | 2026-04-09 01:34:44.050849 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-04-09 01:34:44.050861 | orchestrator | Thursday 09 April 2026 01:33:33 +0000 (0:00:00.328) 0:00:00.328 ******** 2026-04-09 01:34:44.050867 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.050874 | orchestrator | 2026-04-09 01:34:44.050879 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-04-09 01:34:44.050884 | orchestrator | Thursday 09 April 2026 01:33:34 +0000 (0:00:01.023) 0:00:01.352 ******** 2026-04-09 01:34:44.050890 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.050895 | orchestrator | 2026-04-09 01:34:44.050901 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-04-09 01:34:44.050918 | orchestrator | Thursday 09 April 2026 01:33:35 +0000 (0:00:01.193) 
0:00:02.545 ******** 2026-04-09 01:34:44.050924 | orchestrator | ok: [testbed-manager] 2026-04-09 01:34:44.050930 | orchestrator | 2026-04-09 01:34:44.050936 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-04-09 01:34:44.050942 | orchestrator | Thursday 09 April 2026 01:33:36 +0000 (0:00:00.419) 0:00:02.964 ******** 2026-04-09 01:34:44.050947 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.050952 | orchestrator | 2026-04-09 01:34:44.050958 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-04-09 01:34:44.050963 | orchestrator | Thursday 09 April 2026 01:33:55 +0000 (0:00:18.880) 0:00:21.845 ******** 2026-04-09 01:34:44.050987 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-04-09 01:34:44.050993 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-04-09 01:34:44.051002 | orchestrator | 2026-04-09 01:34:44.051007 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-04-09 01:34:44.051012 | orchestrator | Thursday 09 April 2026 01:34:03 +0000 (0:00:08.226) 0:00:30.071 ******** 2026-04-09 01:34:44.051017 | orchestrator | ok: [testbed-manager] => { 2026-04-09 01:34:44.051023 | orchestrator |  "changed": false, 2026-04-09 01:34:44.051028 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:34:44.051034 | orchestrator | } 2026-04-09 01:34:44.051039 | orchestrator | 2026-04-09 01:34:44.051045 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-04-09 01:34:44.051050 | orchestrator | Thursday 09 April 2026 01:34:03 +0000 (0:00:00.157) 0:00:30.229 ******** 2026-04-09 01:34:44.051056 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051061 | orchestrator | 2026-04-09 01:34:44.051066 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] 
************************ 2026-04-09 01:34:44.051071 | orchestrator | Thursday 09 April 2026 01:34:07 +0000 (0:00:03.618) 0:00:33.847 ******** 2026-04-09 01:34:44.051076 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051082 | orchestrator | 2026-04-09 01:34:44.051087 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-04-09 01:34:44.051092 | orchestrator | Thursday 09 April 2026 01:34:08 +0000 (0:00:01.870) 0:00:35.718 ******** 2026-04-09 01:34:44.051097 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051103 | orchestrator | 2026-04-09 01:34:44.051108 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-04-09 01:34:44.051113 | orchestrator | Thursday 09 April 2026 01:34:12 +0000 (0:00:03.720) 0:00:39.438 ******** 2026-04-09 01:34:44.051118 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051124 | orchestrator | 2026-04-09 01:34:44.051129 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-04-09 01:34:44.051134 | orchestrator | Thursday 09 April 2026 01:34:12 +0000 (0:00:00.176) 0:00:39.615 ******** 2026-04-09 01:34:44.051140 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.051145 | orchestrator | 2026-04-09 01:34:44.051150 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-04-09 01:34:44.051156 | orchestrator | Thursday 09 April 2026 01:34:15 +0000 (0:00:02.397) 0:00:42.013 ******** 2026-04-09 01:34:44.051161 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.051166 | orchestrator | 2026-04-09 01:34:44.051172 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-04-09 01:34:44.051177 | orchestrator | Thursday 09 April 2026 01:34:24 +0000 (0:00:08.850) 0:00:50.863 ******** 2026-04-09 01:34:44.051182 | orchestrator | 
changed: [testbed-manager] 2026-04-09 01:34:44.051187 | orchestrator | 2026-04-09 01:34:44.051193 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-04-09 01:34:44.051198 | orchestrator | Thursday 09 April 2026 01:34:24 +0000 (0:00:00.698) 0:00:51.562 ******** 2026-04-09 01:34:44.051203 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051208 | orchestrator | 2026-04-09 01:34:44.051214 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-04-09 01:34:44.051219 | orchestrator | Thursday 09 April 2026 01:34:26 +0000 (0:00:01.499) 0:00:53.062 ******** 2026-04-09 01:34:44.051224 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051230 | orchestrator | 2026-04-09 01:34:44.051235 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] *** 2026-04-09 01:34:44.051240 | orchestrator | Thursday 09 April 2026 01:34:27 +0000 (0:00:01.562) 0:00:54.624 ******** 2026-04-09 01:34:44.051246 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051254 | orchestrator | 2026-04-09 01:34:44.051277 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-04-09 01:34:44.051298 | orchestrator | Thursday 09 April 2026 01:34:28 +0000 (0:00:00.209) 0:00:54.834 ******** 2026-04-09 01:34:44.051306 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051313 | orchestrator | 2026-04-09 01:34:44.051327 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-04-09 01:34:44.051336 | orchestrator | Thursday 09 April 2026 01:34:28 +0000 (0:00:00.346) 0:00:55.180 ******** 2026-04-09 01:34:44.051344 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 01:34:44.051352 | orchestrator | 2026-04-09 01:34:44.051360 | orchestrator | TASK [osism.validations.tempest : Assert floating network 
id has been resolved] *** 2026-04-09 01:34:44.051381 | orchestrator | Thursday 09 April 2026 01:34:32 +0000 (0:00:03.963) 0:00:59.144 ******** 2026-04-09 01:34:44.051389 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-04-09 01:34:44.051397 | orchestrator |  "changed": false, 2026-04-09 01:34:44.051406 | orchestrator |  "msg": "All assertions passed" 2026-04-09 01:34:44.051414 | orchestrator | } 2026-04-09 01:34:44.051423 | orchestrator | 2026-04-09 01:34:44.051433 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-04-09 01:34:44.051442 | orchestrator | Thursday 09 April 2026 01:34:32 +0000 (0:00:00.183) 0:00:59.328 ******** 2026-04-09 01:34:44.051451 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-04-09 01:34:44.051460 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-04-09 01:34:44.051469 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:34:44.051478 | orchestrator | 2026-04-09 01:34:44.051486 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-04-09 01:34:44.051495 | orchestrator | Thursday 09 April 2026 01:34:32 +0000 (0:00:00.174) 0:00:59.502 ******** 2026-04-09 01:34:44.051504 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:34:44.051513 | orchestrator | 2026-04-09 01:34:44.051520 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-04-09 01:34:44.051525 | orchestrator | Thursday 09 April 2026 01:34:32 +0000 (0:00:00.157) 0:00:59.659 ******** 2026-04-09 01:34:44.051530 | orchestrator | ok: [testbed-manager] 2026-04-09 01:34:44.051536 | orchestrator | 2026-04-09 01:34:44.051541 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-04-09 01:34:44.051546 | orchestrator | Thursday 09 April 2026 
01:34:33 +0000 (0:00:00.470) 0:01:00.130 ******** 2026-04-09 01:34:44.051555 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.051565 | orchestrator | 2026-04-09 01:34:44.051578 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-04-09 01:34:44.051586 | orchestrator | Thursday 09 April 2026 01:34:34 +0000 (0:00:00.930) 0:01:01.060 ******** 2026-04-09 01:34:44.051594 | orchestrator | ok: [testbed-manager] 2026-04-09 01:34:44.051603 | orchestrator | 2026-04-09 01:34:44.051611 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-04-09 01:34:44.051619 | orchestrator | Thursday 09 April 2026 01:34:34 +0000 (0:00:00.458) 0:01:01.519 ******** 2026-04-09 01:34:44.051628 | orchestrator | skipping: [testbed-manager] 2026-04-09 01:34:44.051636 | orchestrator | 2026-04-09 01:34:44.051644 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-04-09 01:34:44.051652 | orchestrator | Thursday 09 April 2026 01:34:35 +0000 (0:00:00.320) 0:01:01.839 ******** 2026-04-09 01:34:44.051661 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-04-09 01:34:44.051669 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-04-09 01:34:44.051678 | orchestrator | 2026-04-09 01:34:44.051687 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-04-09 01:34:44.051696 | orchestrator | Thursday 09 April 2026 01:34:42 +0000 (0:00:07.808) 0:01:09.648 ******** 2026-04-09 01:34:44.051701 | orchestrator | changed: [testbed-manager] 2026-04-09 01:34:44.051707 | orchestrator | 2026-04-09 01:34:44.051719 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 01:34:44.051725 | orchestrator | 
testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 01:34:44.051731 | orchestrator | 2026-04-09 01:34:44.051736 | orchestrator | 2026-04-09 01:34:44.051741 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 01:34:44.051746 | orchestrator | Thursday 09 April 2026 01:34:44 +0000 (0:00:01.106) 0:01:10.755 ******** 2026-04-09 01:34:44.051752 | orchestrator | =============================================================================== 2026-04-09 01:34:44.051757 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 18.88s 2026-04-09 01:34:44.051762 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.85s 2026-04-09 01:34:44.051767 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.23s 2026-04-09 01:34:44.051772 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.81s 2026-04-09 01:34:44.051783 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.96s 2026-04-09 01:34:44.051788 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.72s 2026-04-09 01:34:44.051793 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.62s 2026-04-09 01:34:44.051799 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.40s 2026-04-09 01:34:44.051804 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.87s 2026-04-09 01:34:44.051809 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.56s 2026-04-09 01:34:44.051814 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.50s 2026-04-09 01:34:44.051819 | orchestrator | osism.validations.tempest : Copy tempest 
wrapper script ----------------- 1.19s 2026-04-09 01:34:44.051838 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.11s 2026-04-09 01:34:44.051844 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.02s 2026-04-09 01:34:44.051849 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.93s 2026-04-09 01:34:44.051854 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.70s 2026-04-09 01:34:44.051859 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.47s 2026-04-09 01:34:44.051872 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.46s 2026-04-09 01:34:44.284910 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.42s 2026-04-09 01:34:44.284993 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.35s 2026-04-09 01:34:44.460230 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf 2026-04-09 01:34:44.463172 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf 2026-04-09 01:34:44.468003 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-09 01:34:44.468118 | orchestrator | 2026-04-09 01:34:44.468137 | orchestrator | + echo 2026-04-09 01:34:44.468147 | orchestrator | + echo '## IDENTITY (API)' 2026-04-09 01:34:44.468306 | orchestrator | ## IDENTITY (API) 2026-04-09 01:34:44.468322 | orchestrator | 2026-04-09 01:34:44.468329 | orchestrator | + echo 2026-04-09 01:34:44.468336 | orchestrator | + _tempest tempest.api.identity.v3 2026-04-09 01:34:44.468345 | orchestrator | + local regex=tempest.api.identity.v3 2026-04-09 01:34:44.469771 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest 
registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-04-09 01:34:44.470121 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-09 01:34:44.472562 | orchestrator | + tee -a /opt/tempest/20260409-0134.log 2026-04-09 01:34:46.554004 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning: 2026-04-09 01:34:46.554173 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and 2026-04-09 01:34:46.554186 | orchestrator | we strongly recommend against using it for new projects. 2026-04-09 01:34:46.554194 | orchestrator | 2026-04-09 01:34:46.554201 | orchestrator | If you are already using Eventlet, we recommend migrating to a different 2026-04-09 01:34:46.554207 | orchestrator | framework. For more detail see 2026-04-09 01:34:46.554215 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html 2026-04-09 01:34:46.554221 | orchestrator | 2026-04-09 01:34:46.554226 | orchestrator | __import__(import_str) 2026-04-09 01:34:48.066973 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-09 01:34:48.067067 | orchestrator | Did you mean one of these? 
2026-04-09 01:34:48.067082 | orchestrator | help
2026-04-09 01:34:48.067092 | orchestrator | init
2026-04-09 01:34:48.455063 | orchestrator |
2026-04-09 01:34:48.455131 | orchestrator | ## IMAGE (API)
2026-04-09 01:34:48.455137 | orchestrator |
2026-04-09 01:34:48.455142 | orchestrator | + echo
2026-04-09 01:34:48.455146 | orchestrator | + echo '## IMAGE (API)'
2026-04-09 01:34:48.455151 | orchestrator | + echo
2026-04-09 01:34:48.455156 | orchestrator | + _tempest tempest.api.image.v2
2026-04-09 01:34:48.455161 | orchestrator | + local regex=tempest.api.image.v2
2026-04-09 01:34:48.455655 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-09 01:34:48.456316 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:34:48.459049 | orchestrator | + tee -a /opt/tempest/20260409-0134.log
2026-04-09 01:34:50.534991 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:34:50.535103 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:34:50.535112 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:34:50.535127 | orchestrator |
2026-04-09 01:34:50.535140 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:34:50.535147 | orchestrator | framework. For more detail see
2026-04-09 01:34:50.535155 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:34:50.535162 | orchestrator |
2026-04-09 01:34:50.535168 | orchestrator | __import__(import_str)
2026-04-09 01:34:52.085101 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:34:52.085219 | orchestrator | Did you mean one of these?
2026-04-09 01:34:52.085233 | orchestrator | help
2026-04-09 01:34:52.085240 | orchestrator | init
2026-04-09 01:34:52.452424 | orchestrator |
2026-04-09 01:34:52.452521 | orchestrator | ## NETWORK (API)
2026-04-09 01:34:52.452528 | orchestrator |
2026-04-09 01:34:52.452532 | orchestrator | + echo
2026-04-09 01:34:52.452537 | orchestrator | + echo '## NETWORK (API)'
2026-04-09 01:34:52.452543 | orchestrator | + echo
2026-04-09 01:34:52.452547 | orchestrator | + _tempest tempest.api.network
2026-04-09 01:34:52.452552 | orchestrator | + local regex=tempest.api.network
2026-04-09 01:34:52.452774 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-09 01:34:52.454186 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:34:52.456854 | orchestrator | + tee -a /opt/tempest/20260409-0134.log
2026-04-09 01:34:54.569442 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:34:54.569518 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:34:54.569529 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:34:54.569540 | orchestrator |
2026-04-09 01:34:54.569549 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:34:54.569579 | orchestrator | framework. For more detail see
2026-04-09 01:34:54.569589 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:34:54.569596 | orchestrator |
2026-04-09 01:34:54.569604 | orchestrator | __import__(import_str)
2026-04-09 01:34:56.149093 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:34:56.149168 | orchestrator | Did you mean one of these?
2026-04-09 01:34:56.149184 | orchestrator | help
2026-04-09 01:34:56.149196 | orchestrator | init
2026-04-09 01:34:56.534850 | orchestrator |
2026-04-09 01:34:56.534958 | orchestrator | ## VOLUME (API)
2026-04-09 01:34:56.534974 | orchestrator |
2026-04-09 01:34:56.534986 | orchestrator | + echo
2026-04-09 01:34:56.534994 | orchestrator | + echo '## VOLUME (API)'
2026-04-09 01:34:56.535005 | orchestrator | + echo
2026-04-09 01:34:56.535016 | orchestrator | + _tempest tempest.api.volume
2026-04-09 01:34:56.535028 | orchestrator | + local regex=tempest.api.volume
2026-04-09 01:34:56.535162 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-09 01:34:56.536427 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:34:56.541050 | orchestrator | + tee -a /opt/tempest/20260409-0134.log
2026-04-09 01:34:58.585352 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:34:58.585420 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:34:58.585427 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:34:58.585431 | orchestrator |
2026-04-09 01:34:58.585436 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:34:58.585441 | orchestrator | framework. For more detail see
2026-04-09 01:34:58.585448 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:34:58.585452 | orchestrator |
2026-04-09 01:34:58.585456 | orchestrator | __import__(import_str)
2026-04-09 01:35:00.119081 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:35:00.119175 | orchestrator | Did you mean one of these?
2026-04-09 01:35:00.119186 | orchestrator | help
2026-04-09 01:35:00.119193 | orchestrator | init
2026-04-09 01:35:00.492328 | orchestrator |
2026-04-09 01:35:00.492412 | orchestrator | ## COMPUTE (API)
2026-04-09 01:35:00.492422 | orchestrator |
2026-04-09 01:35:00.492429 | orchestrator | + echo
2026-04-09 01:35:00.492438 | orchestrator | + echo '## COMPUTE (API)'
2026-04-09 01:35:00.492446 | orchestrator | + echo
2026-04-09 01:35:00.492452 | orchestrator | + _tempest tempest.api.compute
2026-04-09 01:35:00.492459 | orchestrator | + local regex=tempest.api.compute
2026-04-09 01:35:00.493107 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-09 01:35:00.493138 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:35:00.494744 | orchestrator | + tee -a /opt/tempest/20260409-0135.log
2026-04-09 01:35:02.483066 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:35:02.483151 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:35:02.483162 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:35:02.483169 | orchestrator |
2026-04-09 01:35:02.483178 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:35:02.483185 | orchestrator | framework. For more detail see
2026-04-09 01:35:02.483194 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:35:02.483202 | orchestrator |
2026-04-09 01:35:02.483209 | orchestrator | __import__(import_str)
2026-04-09 01:35:04.048055 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:35:04.048169 | orchestrator | Did you mean one of these?
2026-04-09 01:35:04.048182 | orchestrator | help
2026-04-09 01:35:04.048190 | orchestrator | init
2026-04-09 01:35:04.407793 | orchestrator |
2026-04-09 01:35:04.407913 | orchestrator | ## DNS (API)
2026-04-09 01:35:04.407926 | orchestrator |
2026-04-09 01:35:04.407932 | orchestrator | + echo
2026-04-09 01:35:04.407938 | orchestrator | + echo '## DNS (API)'
2026-04-09 01:35:04.407945 | orchestrator | + echo
2026-04-09 01:35:04.407952 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-09 01:35:04.407959 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-09 01:35:04.408037 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-09 01:35:04.409118 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:35:04.411832 | orchestrator | + tee -a /opt/tempest/20260409-0135.log
2026-04-09 01:35:06.369089 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:35:06.369166 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:35:06.369175 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:35:06.369182 | orchestrator |
2026-04-09 01:35:06.369189 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:35:06.369196 | orchestrator | framework. For more detail see
2026-04-09 01:35:06.369203 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:35:06.369297 | orchestrator |
2026-04-09 01:35:06.369305 | orchestrator | __import__(import_str)
2026-04-09 01:35:07.851651 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:35:07.851737 | orchestrator | Did you mean one of these?
2026-04-09 01:35:07.851745 | orchestrator | help
2026-04-09 01:35:07.851750 | orchestrator | init
2026-04-09 01:35:08.202777 | orchestrator |
2026-04-09 01:35:08.202862 | orchestrator | ## OBJECT-STORE (API)
2026-04-09 01:35:08.202873 | orchestrator |
2026-04-09 01:35:08.202880 | orchestrator | + echo
2026-04-09 01:35:08.202888 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-09 01:35:08.202894 | orchestrator | + echo
2026-04-09 01:35:08.202898 | orchestrator | + _tempest tempest.api.object_storage
2026-04-09 01:35:08.202904 | orchestrator | + local regex=tempest.api.object_storage
2026-04-09 01:35:08.203522 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-09 01:35:08.204650 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-09 01:35:08.207718 | orchestrator | + tee -a /opt/tempest/20260409-0135.log
2026-04-09 01:35:10.151832 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-09 01:35:10.151898 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-09 01:35:10.151904 | orchestrator | we strongly recommend against using it for new projects.
2026-04-09 01:35:10.151909 | orchestrator |
2026-04-09 01:35:10.151914 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-09 01:35:10.151918 | orchestrator | framework. For more detail see
2026-04-09 01:35:10.151924 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-09 01:35:10.151928 | orchestrator |
2026-04-09 01:35:10.151932 | orchestrator | __import__(import_str)
2026-04-09 01:35:11.676446 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-09 01:35:11.676569 | orchestrator | Did you mean one of these?
2026-04-09 01:35:11.676609 | orchestrator | help
2026-04-09 01:35:11.676616 | orchestrator | init
2026-04-09 01:35:12.085901 | orchestrator | ok: Runtime: 0:01:53.710274
2026-04-09 01:35:12.101272 |
2026-04-09 01:35:12.101401 | TASK [Check prometheus alert status]
2026-04-09 01:35:12.638343 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:12.641880 |
2026-04-09 01:35:12.642061 | PLAY RECAP
2026-04-09 01:35:12.642242 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-09 01:35:12.642327 |
2026-04-09 01:35:12.861462 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-09 01:35:12.864110 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 01:35:13.602678 |
2026-04-09 01:35:13.602879 | PLAY [Post output play]
2026-04-09 01:35:13.619583 |
2026-04-09 01:35:13.619784 | LOOP [stage-output : Register sources]
2026-04-09 01:35:13.688952 |
2026-04-09 01:35:13.689247 | TASK [stage-output : Check sudo]
2026-04-09 01:35:14.491215 | orchestrator | sudo: a password is required
2026-04-09 01:35:14.725516 | orchestrator | ok: Runtime: 0:00:00.012348
2026-04-09 01:35:14.741206 |
2026-04-09 01:35:14.741370 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-09 01:35:14.781981 |
2026-04-09 01:35:14.782263 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-09 01:35:14.882235 | orchestrator | ok
2026-04-09 01:35:14.891883 |
2026-04-09 01:35:14.892023 | LOOP [stage-output : Ensure target folders exist]
2026-04-09 01:35:15.349779 | orchestrator | ok: "docs"
2026-04-09 01:35:15.350080 |
2026-04-09 01:35:15.609192 | orchestrator | ok: "artifacts"
2026-04-09 01:35:15.848371 | orchestrator | ok: "logs"
2026-04-09 01:35:15.871077 |
2026-04-09 01:35:15.871246 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-09 01:35:15.912376 |
2026-04-09 01:35:15.912657 | TASK [stage-output : Make all log files readable]
2026-04-09 01:35:16.211581 | orchestrator | ok
2026-04-09 01:35:16.220837 |
2026-04-09 01:35:16.220973 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-09 01:35:16.257285 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:16.273455 |
2026-04-09 01:35:16.273666 | TASK [stage-output : Discover log files for compression]
2026-04-09 01:35:16.298482 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:16.313203 |
2026-04-09 01:35:16.313361 | LOOP [stage-output : Archive everything from logs]
2026-04-09 01:35:16.360052 |
2026-04-09 01:35:16.360241 | PLAY [Post cleanup play]
2026-04-09 01:35:16.369354 |
2026-04-09 01:35:16.369511 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 01:35:16.438218 | orchestrator | ok
2026-04-09 01:35:16.449413 |
2026-04-09 01:35:16.449568 | TASK [Set cloud fact (local deployment)]
2026-04-09 01:35:16.474372 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:16.486952 |
2026-04-09 01:35:16.487095 | TASK [Clean the cloud environment]
2026-04-09 01:35:18.663389 | orchestrator | 2026-04-09 01:35:18 - clean up servers
2026-04-09 01:35:19.432019 | orchestrator | 2026-04-09 01:35:19 - testbed-manager
2026-04-09 01:35:19.518350 | orchestrator | 2026-04-09 01:35:19 - testbed-node-0
2026-04-09 01:35:19.602970 | orchestrator | 2026-04-09 01:35:19 - testbed-node-4
2026-04-09 01:35:19.688215 | orchestrator | 2026-04-09 01:35:19 - testbed-node-3
2026-04-09 01:35:19.767653 | orchestrator | 2026-04-09 01:35:19 - testbed-node-5
2026-04-09 01:35:19.859195 | orchestrator | 2026-04-09 01:35:19 - testbed-node-2
2026-04-09 01:35:19.952322 | orchestrator | 2026-04-09 01:35:19 - testbed-node-1
2026-04-09 01:35:20.043819 | orchestrator | 2026-04-09 01:35:20 - clean up keypairs
2026-04-09 01:35:20.062002 | orchestrator | 2026-04-09 01:35:20 - testbed
2026-04-09 01:35:20.084848 | orchestrator | 2026-04-09 01:35:20 - wait for servers to be gone
2026-04-09 01:35:33.034984 | orchestrator | 2026-04-09 01:35:33 - clean up ports
2026-04-09 01:35:33.221540 | orchestrator | 2026-04-09 01:35:33 - 2fcff95a-44e6-48d5-9425-3fae6ed32302
2026-04-09 01:35:33.474976 | orchestrator | 2026-04-09 01:35:33 - 78b2228e-127d-48f1-8a03-ec538d83d727
2026-04-09 01:35:33.738748 | orchestrator | 2026-04-09 01:35:33 - 7a05c10f-e8b0-4812-b2e1-1bc4a9c7d91c
2026-04-09 01:35:33.975894 | orchestrator | 2026-04-09 01:35:33 - 81688c0c-ee14-4ab3-8ef7-a5f25c128408
2026-04-09 01:35:34.175806 | orchestrator | 2026-04-09 01:35:34 - c2212e14-e675-4457-871c-76e4cfd27a17
2026-04-09 01:35:34.574125 | orchestrator | 2026-04-09 01:35:34 - e2436842-9047-4c50-af1d-25bb54c14cb0
2026-04-09 01:35:34.781988 | orchestrator | 2026-04-09 01:35:34 - f038f879-455c-42af-9541-220167f06414
2026-04-09 01:35:34.986619 | orchestrator | 2026-04-09 01:35:34 - clean up volumes
2026-04-09 01:35:35.114137 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-5-node-base
2026-04-09 01:35:35.152177 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-3-node-base
2026-04-09 01:35:35.192048 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-0-node-base
2026-04-09 01:35:35.235208 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-manager-base
2026-04-09 01:35:35.281083 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-1-node-base
2026-04-09 01:35:35.322541 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-4-node-base
2026-04-09 01:35:35.362080 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-2-node-base
2026-04-09 01:35:35.403593 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-5-node-5
2026-04-09 01:35:35.443404 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-6-node-3
2026-04-09 01:35:35.492146 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-2-node-5
2026-04-09 01:35:35.540147 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-1-node-4
2026-04-09 01:35:35.579962 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-4-node-4
2026-04-09 01:35:35.624265 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-3-node-3
2026-04-09 01:35:35.663565 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-0-node-3
2026-04-09 01:35:35.705828 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-7-node-4
2026-04-09 01:35:35.751606 | orchestrator | 2026-04-09 01:35:35 - testbed-volume-8-node-5
2026-04-09 01:35:35.788838 | orchestrator | 2026-04-09 01:35:35 - disconnect routers
2026-04-09 01:35:35.903053 | orchestrator | 2026-04-09 01:35:35 - testbed
2026-04-09 01:35:37.448140 | orchestrator | 2026-04-09 01:35:37 - clean up subnets
2026-04-09 01:35:37.503170 | orchestrator | 2026-04-09 01:35:37 - subnet-testbed-management
2026-04-09 01:35:37.688738 | orchestrator | 2026-04-09 01:35:37 - clean up networks
2026-04-09 01:35:38.310597 | orchestrator | 2026-04-09 01:35:38 - net-testbed-management
2026-04-09 01:35:38.615850 | orchestrator | 2026-04-09 01:35:38 - clean up security groups
2026-04-09 01:35:38.656024 | orchestrator | 2026-04-09 01:35:38 - testbed-node
2026-04-09 01:35:38.772897 | orchestrator | 2026-04-09 01:35:38 - testbed-management
2026-04-09 01:35:38.907953 | orchestrator | 2026-04-09 01:35:38 - clean up floating ips
2026-04-09 01:35:38.963392 | orchestrator | 2026-04-09 01:35:38 - 81.163.192.40
2026-04-09 01:35:39.331927 | orchestrator | 2026-04-09 01:35:39 - clean up routers
2026-04-09 01:35:39.443132 | orchestrator | 2026-04-09 01:35:39 - testbed
2026-04-09 01:35:41.044401 | orchestrator | ok: Runtime: 0:00:24.021179
2026-04-09 01:35:41.048784 |
2026-04-09 01:35:41.048945 | PLAY RECAP
2026-04-09 01:35:41.049067 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-09 01:35:41.049128 |
2026-04-09 01:35:41.192786 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 01:35:41.196443 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 01:35:41.959237 |
2026-04-09 01:35:41.959407 | PLAY [Cleanup play]
2026-04-09 01:35:41.975807 |
2026-04-09 01:35:41.975953 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 01:35:42.032758 | orchestrator | ok
2026-04-09 01:35:42.042417 |
2026-04-09 01:35:42.042597 | TASK [Set cloud fact (local deployment)]
2026-04-09 01:35:42.077402 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:42.094814 |
2026-04-09 01:35:42.095028 | TASK [Clean the cloud environment]
2026-04-09 01:35:43.311457 | orchestrator | 2026-04-09 01:35:43 - clean up servers
2026-04-09 01:35:43.812511 | orchestrator | 2026-04-09 01:35:43 - clean up keypairs
2026-04-09 01:35:43.829579 | orchestrator | 2026-04-09 01:35:43 - wait for servers to be gone
2026-04-09 01:35:43.868712 | orchestrator | 2026-04-09 01:35:43 - clean up ports
2026-04-09 01:35:43.949049 | orchestrator | 2026-04-09 01:35:43 - clean up volumes
2026-04-09 01:35:44.033092 | orchestrator | 2026-04-09 01:35:44 - disconnect routers
2026-04-09 01:35:44.061328 | orchestrator | 2026-04-09 01:35:44 - clean up subnets
2026-04-09 01:35:44.083729 | orchestrator | 2026-04-09 01:35:44 - clean up networks
2026-04-09 01:35:44.236312 | orchestrator | 2026-04-09 01:35:44 - clean up security groups
2026-04-09 01:35:44.269208 | orchestrator | 2026-04-09 01:35:44 - clean up floating ips
2026-04-09 01:35:44.291489 | orchestrator | 2026-04-09 01:35:44 - clean up routers
2026-04-09 01:35:44.633763 | orchestrator | ok: Runtime: 0:00:01.478159
2026-04-09 01:35:44.637747 |
2026-04-09 01:35:44.637918 | PLAY RECAP
2026-04-09 01:35:44.638026 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-09 01:35:44.638077 |
2026-04-09 01:35:44.762022 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 01:35:44.763993 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 01:35:45.581884 |
2026-04-09 01:35:45.582043 | PLAY [Base post-fetch]
2026-04-09 01:35:45.597078 |
2026-04-09 01:35:45.597205 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-09 01:35:45.652370 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:45.667653 |
2026-04-09 01:35:45.668131 | TASK [fetch-output : Set log path for single node]
2026-04-09 01:35:45.716733 | orchestrator | ok
2026-04-09 01:35:45.725633 |
2026-04-09 01:35:45.725785 | LOOP [fetch-output : Ensure local output dirs]
2026-04-09 01:35:46.213826 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/work/logs"
2026-04-09 01:35:46.492015 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/work/artifacts"
2026-04-09 01:35:46.750800 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/445196435b5642fdb6b68cd895968d94/work/docs"
2026-04-09 01:35:46.774513 |
2026-04-09 01:35:46.776045 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-09 01:35:47.724941 | orchestrator | changed: .d..t...... ./
2026-04-09 01:35:47.725316 | orchestrator | changed: All items complete
2026-04-09 01:35:47.725383 |
2026-04-09 01:35:48.419643 | orchestrator | changed: .d..t...... ./
2026-04-09 01:35:49.126068 | orchestrator | changed: .d..t...... ./
2026-04-09 01:35:49.141235 |
2026-04-09 01:35:49.141354 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-09 01:35:49.180658 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:49.185155 | orchestrator | skipping: Conditional result was False
2026-04-09 01:35:49.201529 |
2026-04-09 01:35:49.201641 | PLAY RECAP
2026-04-09 01:35:49.201782 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-09 01:35:49.201828 |
2026-04-09 01:35:49.333395 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 01:35:49.334512 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 01:35:50.083166 |
2026-04-09 01:35:50.083349 | PLAY [Base post]
2026-04-09 01:35:50.098754 |
2026-04-09 01:35:50.098954 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-09 01:35:51.720578 | orchestrator | changed
2026-04-09 01:35:51.730997 |
2026-04-09 01:35:51.731134 | PLAY RECAP
2026-04-09 01:35:51.731208 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-09 01:35:51.731284 |
2026-04-09 01:35:51.860670 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 01:35:51.862779 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-09 01:35:52.647997 |
2026-04-09 01:35:52.648181 | PLAY [Base post-logs]
2026-04-09 01:35:52.658926 |
2026-04-09 01:35:52.659062 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-09 01:35:53.146003 | localhost | changed
2026-04-09 01:35:53.156116 |
2026-04-09 01:35:53.156265 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-09 01:35:53.192795 | localhost | ok
2026-04-09 01:35:53.196032 |
2026-04-09 01:35:53.196137 | TASK [Set zuul-log-path fact]
2026-04-09 01:35:53.211466 | localhost | ok
2026-04-09 01:35:53.219761 |
2026-04-09 01:35:53.219874 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 01:35:53.244999 | localhost | ok
2026-04-09 01:35:53.248143 |
2026-04-09 01:35:53.248250 | TASK [upload-logs : Create log directories]
2026-04-09 01:35:53.750728 | localhost | changed
2026-04-09 01:35:53.756854 |
2026-04-09 01:35:53.757049 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-09 01:35:54.312070 | localhost -> localhost | ok: Runtime: 0:00:00.006855
2026-04-09 01:35:54.316382 |
2026-04-09 01:35:54.316493 | TASK [upload-logs : Upload logs to log server]
2026-04-09 01:35:54.900739 | localhost | Output suppressed because no_log was given
2026-04-09 01:35:54.903565 |
2026-04-09 01:35:54.903731 | LOOP [upload-logs : Compress console log and json output]
2026-04-09 01:35:54.955248 | localhost | skipping: Conditional result was False
2026-04-09 01:35:54.960256 | localhost | skipping: Conditional result was False
2026-04-09 01:35:54.974600 |
2026-04-09 01:35:54.974885 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-09 01:35:55.023482 | localhost | skipping: Conditional result was False
2026-04-09 01:35:55.024163 |
2026-04-09 01:35:55.026341 | localhost | skipping: Conditional result was False
2026-04-09 01:35:55.040894 |
2026-04-09 01:35:55.041132 | LOOP [upload-logs : Upload console log and json output]